Just to check that I'm on the right track documentation-wise (because there's no point convincing them if we're on the wrong track anyway)...
My plan is for the engineers to develop their project plan (including a set of requirements) and then write more detailed requirements documents for each 'task' they have. They then produce their design and test cases, which must 'map' to the requirements (i.e., inputs match outputs). I've said this is iterative in the sense that they may not know all their requirements/design up front, so they write out what they can, design what they can, have a go at coding, see what happens, then go back and change the requirements/design as appropriate. That said, the requirements and design need approval before commencing and as changes are made (with those changes tracked). From there:

- They verify each time they code (unit tests, once they're ready for them), and their own policy is to never check in broken code (i.e., it must pass their unit tests).
- There is then verification with automatic builds.
- Partial validation happens periodically (maybe once per week) to check they're on track.
- When ready for release, they have to pass verification (automatic builds) and then full validation (since we don't have time for that with every weekly build).

Alongside this, they have weekly meetings where they assign tickets (their 'tasks' for the week, also documented in their ticketing system with responsibilities, requirements, etc.), and they'll have a review at appropriate points (to check the requirements/design/tests are on track).
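As an aside, the requirement-to-test 'mapping' is something that can be checked mechanically rather than by eye. A minimal sketch of what I mean (all the IDs and the data layout are made up for illustration, not any particular tool's format):

```python
# Sketch: flag requirements that no test case maps to.
# REQ-* / TC-* identifiers and the dict layout are hypothetical examples.

def uncovered_requirements(requirements, test_cases):
    """Return the requirement IDs that no test case claims to cover."""
    covered = {req_id for tc in test_cases for req_id in tc["maps_to"]}
    return sorted(set(requirements) - covered)

requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = [
    {"id": "TC-1", "maps_to": ["REQ-1"]},
    {"id": "TC-2", "maps_to": ["REQ-1", "REQ-3"]},
]

print(uncovered_requirements(requirements, test_cases))  # ['REQ-2']
```

A check like this could run as part of the automatic builds, so a release can't pass verification while a requirement has no test against it.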
So the records/docs are: the plan, requirements, design docs, and test cases; approval and tracked changes for each of these; records of the weekly meetings/reviews; and records of all tests conducted. It's probably worth having a checklist/template for the plan, requirements, design, and meetings so it's easy for them to check items off and make sure they've not missed anything.
Does that sound too cumbersome? We are getting a software program that lets them enter each requirement, with their test cases and assigned tasks mapping to each requirement as they enter them. The design will either be done in an individual document (attached in the software) or be broken down and slotted in alongside the requirements. Test cases are already mapped to each requirement as they're entered, and you can single-source some test case steps so they don't have to be written out multiple times. The software keeps records of tests conducted (and their results) and tracks all changes for them. I was thinking approval would have to be done manually, but could we just have a meeting that reviews the requirements and approves them as a set (rather than having to 'approve' every individual requirement)? This program also provides reports on progress and on what's been tested.
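To illustrate what I understand 'single-sourcing' test steps to mean (the step library and test case records here are my own made-up example, not the tool's actual data model): common steps live in one place and test cases reference them by ID, so a change to a shared step updates every test case that uses it.

```python
# Hypothetical single-sourced step library: shared steps defined once,
# referenced by ID from multiple test cases.

STEP_LIBRARY = {
    "LOGIN": "Log in as a standard user",
    "OPEN_PROJ": "Open the sample project",
}

test_cases = {
    "TC-1": {"maps_to": "REQ-1",
             "steps": ["LOGIN", "OPEN_PROJ", "Export the report as PDF"]},
    "TC-2": {"maps_to": "REQ-2",
             "steps": ["LOGIN", "Delete the sample project"]},
}

def expand_steps(tc_id):
    """Resolve shared step IDs into full step text; pass one-off steps through."""
    return [STEP_LIBRARY.get(step, step) for step in test_cases[tc_id]["steps"]]

print(expand_steps("TC-2"))
# ['Log in as a standard user', 'Delete the sample project']
```

Editing `STEP_LIBRARY["LOGIN"]` once would then flow through to both TC-1 and TC-2, which is the record-keeping benefit I'm hoping the software gives us.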
Sorry if this is not the place to be asking -- it's a flow-on from my thread but a slightly different question.
Thanks so much for your help!