[IMP] changes in strings and add assert

bzr revid: tpa@tinyerp.com-20111208053143-lapbqd5grvn4lu0t
Turkesh Patel (Open ERP) 2011-12-08 11:01:43 +05:30
parent 8bd419b7b4
commit 0c0718d36f
1 changed file with 13 additions and 9 deletions


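For context, the file below is an OpenERP YAML test: each top-level list item is either a plain comment string describing the step, a !python block executed against the named model (with cr, uid, ref and, as used here, context in scope), or an !assert check evaluated on a record. A minimal sketch of the pattern, reusing the survey_2 record and strings from the first hunk (comment wording and indentation are illustrative):

-
  I check that state of "Employee Evaluation" survey is Open.
-
  !python {model: survey}: |
    # any Python statements; an uncaught exception fails the test
    self.survey_open(cr, uid, [ref("survey_2")], context)
-
  !assert {model: survey, id: survey_2, severity: error, string: Survey should be in Open state}:
    - state == 'open'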
@@ -1,10 +1,10 @@
 -
-  I set the "Employee Evaluation" in open state.
+  I set the "Employee Evaluation" survey in open state.
 -
   !python {model: survey}: |
     self.survey_open(cr, uid, [ref("survey_2")], context)
 -
-  I check that state of "Employee Evaluation" is Open.
+  I check that state of "Employee Evaluation" survey is Open.
 -
   !assert {model: survey, id: survey_2, severity: error, string: Survey should be in Open state}:
     - state == 'open'
@@ -14,7 +14,7 @@
   !python {model: hr_evaluation.evaluation}: |
     self.button_plan_in_progress(cr, uid, [ref('hr_evaluation_evaluation_0')])
 -
-  I check that state is open.
+  I check that state is Plan in progress.
 -
   !assert {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0, severity: error, string: Evaluation should be in open state}:
     - state == 'wait'
@@ -22,11 +22,15 @@
   I find a mistake on evaluation form. So I cancel the evaluation and again start it.
 -
   !python {model: hr_evaluation.evaluation}: |
+    evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0') , context)
     self.button_cancel(cr, uid, [ref('hr_evaluation_evaluation_0')])
+    assert evaluation.state == 'cancel', 'Evaluation should be in cancel state'
     self.button_draft(cr, uid, [ref('hr_evaluation_evaluation_0')])
+    evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0') , context)
+    assert evaluation.state == 'draft', 'Evaluation should be in draft state'
     self.button_plan_in_progress(cr, uid, [ref('hr_evaluation_evaluation_0')])
 -
-  I check that state is open.
+  I check that state is "Plan in progress".
 -
   !assert {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0, severity: error, string: Evaluation should be in open state}:
     - state == 'wait'
@ -44,7 +48,7 @@
evaluation_id: hr_evaluation_evaluation_0
request_id: survey_request_1
-
Give answer of the first page in "Employee Evaluation".
Give answer of the first page in "Employee Evaluation" survey.
-
!python {model: survey.question.wiz}: |
ctx = {'active_model':'hr.evaluation.interview', 'active_id': ref('evaluation_interview_0'), 'active_ids': [ref('evaluation_interview_0')], 'survey_id': ref("survey_2")}
@@ -59,7 +63,7 @@
         str(ref("survey_question_2")) +"_" +str(ref("survey_answer_98")) + "_multi" :'tpa review'
         }, context = ctx)
 -
-  I close this Evaluation by giving answer of questions.
+  I close this Evaluation survey by giving answer of questions.
 -
   !python {model: hr_evaluation.evaluation}: |
     evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0'))
@@ -69,7 +73,7 @@
       interview = interview_obj.browse(cr, uid, survey.id, context)
       assert interview.state == "done", 'survey must be in done state'
 -
-  I print the survey.
+  I print the evaluation.
 -
   !python {model: hr_evaluation.evaluation}: |
     evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0'))
@@ -80,7 +84,7 @@
   !python {model: hr_evaluation.evaluation}: |
     self.button_final_validation(cr, uid, [ref("hr_evaluation_evaluation_0")])
 -
-  I check that state is "Final Validation".
+  I check that state is "Waiting Appreciation".
 -
   !assert {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0}:
     - state == 'progress'
@@ -97,7 +101,7 @@
 -
   I check that state of Evaluation is done.
 -
-  !assert {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0, severity: error, string: Evaluation should be in pending state}:
+  !assert {model: hr_evaluation.evaluation, id: hr_evaluation_evaluation_0, severity: error, string: Evaluation should be in done state}:
     - state == 'done'
 -
   Print Evaluations Statistics Report
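
Read together with its context lines, the hunk at -22,11 +22,15 (the "add assert" part of this commit) leaves the cancel-and-restart stanza reading roughly as follows; this is a reconstruction from the lines shown above, with indentation assumed:

-
  I find a mistake on evaluation form. So I cancel the evaluation and again start it.
-
  !python {model: hr_evaluation.evaluation}: |
    evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0') , context)
    self.button_cancel(cr, uid, [ref('hr_evaluation_evaluation_0')])
    assert evaluation.state == 'cancel', 'Evaluation should be in cancel state'
    self.button_draft(cr, uid, [ref('hr_evaluation_evaluation_0')])
    evaluation = self.browse(cr, uid, ref('hr_evaluation_evaluation_0') , context)
    assert evaluation.state == 'draft', 'Evaluation should be in draft state'
    self.button_plan_in_progress(cr, uid, [ref('hr_evaluation_evaluation_0')])

The second browse before the 'draft' assert is presumably needed because browse records cache field values once they are read, so a fresh record is required to observe the state written by button_draft.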