[MERGE] hr_evaluation: clean yml tests

bzr revid: rco@openerp.com-20111221103428-3ugyq0yr3m7dopvm
Raphael Collet 2011-12-21 11:34:28 +01:00
commit bac17b7cf6
4 changed files with 86 additions and 131 deletions

View File

@@ -50,7 +50,10 @@ in the form of pdf file. Implements a dashboard for My Current Evaluations
'hr_evaluation_data.xml',
'hr_evaluation_installer.xml',
],
"test": ["test/test_hr_evaluation.yml"],
"test": [
"test/test_hr_evaluation.yml",
"test/hr_evalution_demo.yml",
],
"active": False,
"installable": True,
"certificate" : "00883207679172998429",

View File

@@ -14,5 +14,11 @@
<field name="evaluation_plan_id" ref="hr_evaluation_plan_managersevaluationplan0"/>
</record>
<record id="hr_evaluation_evaluation_0" model="hr_evaluation.evaluation">
<field name="date">2011-12-24</field>
<field name="employee_id" ref="hr.employee1"/>
<field name="plan_id" ref="hr_evaluation.hr_evaluation_plan_managersevaluationplan0"/>
</record>
</data>
</openerp>

View File

@@ -0,0 +1,6 @@
-
!record {model: hr.employee, id: hr.employee1, view: False}:
evaluation_plan_id: hr_evaluation_plan_managersevaluationplan0
-
!record {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0, view: False}:
plan_id: hr_evaluation.hr_evaluation_plan_managersevaluationplan0
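When a !record entry reuses an id that already exists (hr.employee1 from the hr module, or hr_evaluation_evaluation_0 defined in the XML demo data above), the yml loader writes the given values onto the existing record instead of creating a new one, and view: False applies the values as-is, without replaying a form view's defaults and on_change logic (this reading of the loader is an assumption, not something stated in the commit). Roughly, the first entry above amounts to:

!python {model: hr.employee}: |
    # a sketch of what the !record on hr.employee1 boils down to (assumption)
    self.write(cr, uid, [ref('hr.employee1')],
               {'evaluation_plan_id': ref('hr_evaluation.hr_evaluation_plan_managersevaluationplan0')})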

View File

@@ -1,144 +1,73 @@
-
In order to test the hr_evaluation module for OpenERP, I will create a plan and then create an evaluation under that plan.
-
I create a new Department.
-
!record {model: hr.department, id: hr_department_rd0}:
manager_id: base.user_root
name: 'R & D'
-
I create a new employee.
-
!record {model: hr.employee, id: hr_employee_employee0}:
address_home_id: base.res_partner_address_1
company_id: base.main_company
gender: male
marital: single
name: Mark Johnson
user_id: base.user_root
department_id: 'hr_department_rd0'
-
I create another new employee and assign the first one as its Manager.
-
!record {model: hr.employee, id: hr_employee_employee1}:
address_home_id: base.res_partner_address_3000
company_id: base.main_company
gender: male
name: Phil Graves
user_id: base.user_demo
parent_id: 'hr_employee_employee0'
-
I Create an "Employee Evaluation" survey for Manager's Evaluation Plan.
-
!record {model: 'survey', id: survey_0}:
title: 'Employee Evaluation'
max_response_limit: 20
response_user: 2
-
I Create an "Employee Evaluation" page in "Employee Evaluation" survey.
-
!record {model: 'survey.page', id: survey_employee_page_0}:
title: 'Employee Evaluation'
survey_id: survey_0
-
I Create "What is your Name" question in "Employee Evaluation" survey page.
-
!record {model: 'survey.question', id: survey_p_question_0}:
question: 'What is your Name?'
type: 'single_textbox'
sequence: 1
page_id: survey_employee_page_0
-
I Create "What is your gender" Question in "Employee Evaluation" survey page.
-
!record {model: 'survey.question', id: survey_p_question_1}:
question: 'What is your gender?'
type: multiple_choice_only_one_ans
sequence: 2
is_require_answer: true
page_id: survey_employee_page_0
-
I Create "Male" answer in question "What is your gender?"
-
!record {model: 'survey.answer', id: survey_p_1_1}:
answer: 'Male'
sequence: 1
question_id : survey_p_question_1
-
I Create "Female" answer in question "What is your gender?"
-
!record {model: 'survey.answer', id: survey_p_1_2}:
answer: 'Female'
sequence: 2
question_id : survey_p_question_1
-
I set the survey to the open state.
I set the "Employee Evaluation" survey to the open state.
-
!python {model: survey}: |
self.survey_open(cr, uid, [ref("survey_0")], context)
self.survey_open(cr, uid, [ref("survey_2")], context)
-
I create an Evaluation plan and select the "Employee Evaluation" survey for the "Send to Subordinates" and "Final Interview with Manager" phases.
I check that the state of the "Employee Evaluation" survey is Open.
-
!record {model: hr_evaluation.plan, id: hr_evaluation_plan_managersplan0}:
company_id: base.main_company
month_first: 3
month_next: 6
name: Manager's Plan
phase_ids:
- action: bottom-up
name: Send to Subordinates
survey_id: 'survey_0'
- action: top-down
name: Final Interview with manager
sequence: 2
survey_id: 'survey_0'
-
I assign the evaluation plan to the employee "Mark Johnson".
-
!python {model: hr.employee}: |
res = self.onchange_evaluation_plan_id(cr, uid, [ref('hr_employee_employee0')], ref('hr_evaluation_plan_managersplan0'), False, None)
values = dict([('evaluation_plan_id', ref('hr_evaluation_plan_managersplan0'))] + res['value'].items())
self.write(cr, uid, [ref('hr_employee_employee0')], values, None)
-
I create an Evaluation for an employee under the "Manager Evaluation Plan".
-
!record {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0}:
date: !eval time.strftime('%Y-%m-%d')
employee_id: 'hr_employee_employee1'
plan_id: 'hr_evaluation_plan_managersplan0'
progress: 0.0
state: draft
-
I change the employee on the Evaluation.
-
!python {model: hr_evaluation.evaluation}: |
res = self.onchange_employee_id(cr, uid, [ref('hr_evaluation_evaluation_0')], ref('hr_employee_employee0'), None)
values = dict([('employee_id', ref('hr_employee_employee0'))] + res['value'].items())
self.write(cr, uid, [ref('hr_evaluation_evaluation_0')], values, None)
!assert {model: survey, id: survey_2, severity: error, string: Survey should be in Open state}:
- state == 'open'
-
I start the evaluation process by clicking on the "Start Evaluation" button.
-
!python {model: hr_evaluation.evaluation}: |
self.button_plan_in_progress(cr, uid, [ref('hr_evaluation_evaluation_0')])
-
I check that state is "Plan in progress".
-
!assert {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0, severity: error, string: Evaluation should be in 'Plan in progress' state}:
- state == 'wait'
-
I find a mistake on the evaluation form, so I cancel the evaluation and start it again.
-
!python {model: hr_evaluation.evaluation}: |
self.button_cancel(cr, uid, [ref('hr_evaluation_evaluation_0')])
self.button_draft(cr, uid, [ref('hr_evaluation_evaluation_0')])
self.button_plan_in_progress(cr, uid, [ref('hr_evaluation_evaluation_0')])
evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0'), context)
self.button_cancel(cr, uid, [ref('hr_evaluation_evaluation_0')])
assert evaluation.state == 'cancel', 'Evaluation should be in cancel state'
self.button_draft(cr, uid, [ref('hr_evaluation_evaluation_0')])
evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0'), context)
assert evaluation.state == 'draft', 'Evaluation should be in draft state'
self.button_plan_in_progress(cr, uid, [ref('hr_evaluation_evaluation_0')])
-
I close this survey request by giving answers to the survey questions.
I check that the state is "Plan in progress" and that an "Interview Request" record is created.
-
!python {model: hr_evaluation.evaluation}: |
evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0'))
self.pool.get('hr.evaluation.interview').survey_req_done(cr, uid, [r.id for r in evaluation.survey_request_ids])
interview_obj = self.pool.get('hr.evaluation.interview')
evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0'), context)
assert evaluation.state == 'wait', "Evaluation should be 'Plan in progress' state"
interview_ids = interview_obj.search(cr, uid, [('evaluation_id','=', ref('hr_evaluation_evaluation_0'))])
assert len(interview_ids), "Interview evaluation survey not created"
-
I print the survey.
I give answers to the first page of the "Employee Evaluation" survey.
-
!python {model: survey.question.wiz}: |
name_wiz_obj=self.pool.get('survey.name.wiz')
interview_obj = self.pool.get('hr.evaluation.interview')
interview_ids = interview_obj.search(cr, uid, [('evaluation_id','=', ref('hr_evaluation_evaluation_0'))])
assert len(interview_ids), "Interview evaluation survey not created"
ctx = {'active_model':'hr.evaluation.interview', 'active_id': interview_ids[0], 'active_ids': [interview_ids], 'survey_id': ref("survey_2")}
name_id = name_wiz_obj.create(cr, uid, {'survey_id': ref("survey_2")})
ctx ["sur_name_id"] = name_id
self.create(cr, uid, {str(ref("survey_question_2")) +"_" +str(ref("survey_answer_1")) + "_multi" :'tpa',
str(ref("survey_question_2")) +"_" +str(ref("survey_answer_10")) + "_multi" :'application eng',
str(ref("survey_question_2")) +"_" +str(ref("survey_answer_20")) + "_multi" :'3',
str(ref("survey_question_2")) +"_" +str(ref("survey_answer_25")) + "_multi" :'2011-12-02 16:42:00',
str(ref("survey_question_2")) +"_" +str(ref("survey_answer_43")) + "_multi" :'HR',
str(ref("survey_question_2")) +"_" +str(ref("survey_answer_98")) + "_multi" :'tpa review'
}, context = ctx)
-
I close this Evaluation survey by giving answers to the questions.
-
!python {model: hr_evaluation.evaluation}: |
interview_obj = self.pool.get('hr.evaluation.interview')
evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0'))
interview_obj.survey_req_done(cr, uid, [r.id for r in evaluation.survey_request_ids])
for survey in evaluation.survey_request_ids:
interview = interview_obj.browse(cr, uid, survey.id, context)
assert interview.state == "done", 'survey must be in done state'
-
I print the evaluation.
-
!python {model: hr_evaluation.evaluation}: |
evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0'))
@@ -147,22 +76,33 @@
I click on "Final Validation" button to finalise evaluation.
-
!python {model: hr_evaluation.evaluation}: |
self.button_final_validation(cr, uid, [ref("hr_evaluation.hr_evaluation_evaluation_0")],
{"active_ids": [ref("hr_evaluation.menu_open_view_hr_evaluation_tree")]})
self.button_final_validation(cr, uid, [ref("hr_evaluation_evaluation_0")])
-
I check that state is "Final Validation".
I check that state is "Waiting Appreciation".
-
!assert {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0}:
- state == 'progress'
-
Give Rating "Meet expectations" by selecting overall Rating.
-
!record {model: hr_evaluation.evaluation, id: hr_evaluation.hr_evaluation_evaluation_0}:
!record {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0}:
rating: '2'
-
I close this Evaluation by clicking on the "Done" button of this wizard.
-
!python {model: hr_evaluation.evaluation}: |
self.button_done(cr, uid, [ref("hr_evaluation.hr_evaluation_evaluation_0")], {"active_ids": [ref("hr_evaluation.menu_open_view_hr_evaluation_tree")]})
self.button_done(cr, uid, [ref("hr_evaluation_evaluation_0")])
-
I check that the state of the Evaluation is done.
-
!assert {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0, severity: error, string: Evaluation should be in done state}:
- state == 'done'
-
I print the Evaluations Statistics Report.
-
!python {model: hr.evaluation.report}: |
import netsvc, tools, os, time
ctx={}
data_dict={'state': 'done', 'rating': 2, 'employee_id': ref("hr.employee1")}
from tools import test_reports
test_reports.try_report_action(cr, uid, 'hr_evaluation_evaluation_0', wiz_data=data_dict, context=ctx, our_module='hr_evaluation')
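The try_report_action helper from the server's tools/test_reports module resolves the identifier it is given, executes the corresponding action, fills wiz_data into any wizard that opens along the way, and renders every report it reaches, so this step fails if the report cannot be produced (this reading of the helper, and the idea that the string id is looked up within our_module, are assumptions rather than facts stated in this commit). A commented sketch of the same call pattern:

!python {model: hr.evaluation.report}: |
    # smoke-test the evaluation statistics report; values mirror the step above
    from tools import test_reports
    # wizard field values pushed into any wizard opened by the action
    wiz_data = {'state': 'done', 'rating': 2, 'employee_id': ref('hr.employee1')}
    test_reports.try_report_action(cr, uid, 'hr_evaluation_evaluation_0',
                                   wiz_data=wiz_data, context={},
                                   our_module='hr_evaluation')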