Software Requirements

Should A Business Analyst Know How To Build Robust Project Schedules?

The requirements life cycle is directly tied to the project life cycle; there really isn't any way around that reality. If a project is not planned well, you can expect the impact to be felt in requirements gathering, requirements analysis, requirements validation and requirements change management. This could be one of the reasons why you see a significant rise in job opportunities titled Business Analyst / Project Manager. 


Although many successful business analysts do not know their way around project scheduling tools such as Microsoft Project, I find that skill set very useful. On one project I worked on, the project manager built an MS Project schedule that clearly missed the mark in the requirements management area. That entire section did not list a single requirement task as a predecessor to downstream tasks or milestones. This was a circumstance in which I had to fight for my requirements schedule without being confrontational or combative. 

I believe that every business analyst should have a requirements management plan and work to have it submitted into the project documentation portfolio. That plan may or may not include a schedule; my strong recommendation is that it should. In the situation described above, I did the following:

- built an MS Project schedule for the requirement life cycle tasks
- rebuilt the project manager's schedule to include the above content
- set up each requirement task as a predecessor task where appropriate
- estimated the duration of requirement tasks with a goal of aligning with overall schedule objectives
- used color highlighting to show the critical path tasks as they pertained to requirements
- scheduled a workshop with the project manager to go over the content
- used a structured walkthrough method to go over my recommendations
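The predecessor chain is what drives the overall timeline, which is why a missing requirement predecessor is so costly. As a rough illustration (the task names and durations here are invented, not from the project in the story), here is how earliest finish dates fall out of predecessor links:

```python
# Sketch of requirement tasks wired in as schedule predecessors.
# Task names and durations are hypothetical, for illustration only.
from functools import lru_cache

tasks = {
    # name: (duration_in_days, [predecessor names])
    "Elicit requirements":   (5,  []),
    "Analyze requirements":  (4,  ["Elicit requirements"]),
    "Validate requirements": (3,  ["Analyze requirements"]),
    "Design":                (10, ["Validate requirements"]),
    "Build":                 (15, ["Design"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name):
    """Earliest finish = own duration + latest finish of all predecessors."""
    duration, preds = tasks[name]
    return duration + max((earliest_finish(p) for p in preds), default=0)

# Because every requirement task is a predecessor, slipping any of them
# pushes out the finish of "Build" (5 + 4 + 3 + 10 + 15 = 37 days):
print(earliest_finish("Build"))  # 37
```

If the requirement tasks are left out of the predecessor network, the tool simply cannot show this ripple effect, which is exactly the gap I found in that schedule.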

I was very careful in my approach during the one-on-one workshop with the project manager. I had a high level of respect for her and didn’t want her to take my feedback as a challenge. Project managers have a tough job and my goal was to be a valuable resource for the project. Although I obviously felt a strong commitment to my feedback, I was prepared to defer and commit to the chosen path as a project professional.

This project manager was and continues to be an exceptional PM. She did incorporate many of my recommendations, but that wasn't the most important thing she did. She scheduled another workshop with key team members and we went over it again. This opened up a valuable discussion and the team bonded a bit tighter that day. The schedules I created became a great way for the team as a whole to understand the requirements life cycle, and they paid valuable dividends as we marched towards a successful project completion. 

This is just one story, and I hope it provided some insights for you. I welcome any comments you may have, both positive and negative. 


© 2016 Dwayne Wright

Use Case Preconditions And How They Are Leveraged

When the BA is eliciting requirements that will end up in a use case, it is important that they uncover and document the preconditions for tasks. In many cases, the stakeholder will not think about the preconditions when talking about the tasks they perform. This happens because they are so close to the topic that they assume such information is obvious and goes without saying.

If someone is focused solely upon the business needs of a project, they may think that the preconditions section of a use case is one of the least valuable components of the use case. They may consider it nice to have but ultimately its purpose is to add a little bit of flavor to the overall narrative. 


If you are a developer or a tester, the preconditions take on an entirely different importance. In many cases, they are the first thing they look at in the use case. The precondition statement describes a system state that must be satisfied before the use case can execute. This means that the code the developer writes will not execute, and the tester cannot perform a reliable test, unless the established preconditions are met.

Job shadowing can be a great way to discover missing preconditions for tasks and a solid way to validate use cases. Job shadowing involves participating in the workday of the stakeholder, user or target user group. This participation allows a front row seat to the operational environment in which tasks are performed and the challenges that might need to be overcome to ensure the task(s) is completed successfully. 

It should also be noted that the precondition information is not always a straight list of checks in descending order of importance. All the precondition statements are important unless the narrative says otherwise. In some cases, the precondition can be one of several options. In that case, it is important to call out the OR statement for the developers and testers. There is a huge difference in their world between "this AND that" and "this OR that". 
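To make that distinction concrete, here is a minimal sketch of a precondition guard for a hypothetical "Submit Order" use case (the precondition names are invented for illustration):

```python
# Hypothetical preconditions for a "Submit Order" use case, written so
# the OR is explicit: user_authenticated AND (payment_on_file OR po_number).
def preconditions_met(user_authenticated, payment_on_file, po_number):
    # Either payment option satisfies the second precondition; this OR
    # is exactly what the narrative must call out for developers/testers.
    return user_authenticated and (payment_on_file or po_number)

print(preconditions_met(True, False, True))   # True  (second OR branch holds)
print(preconditions_met(False, True, True))   # False (the AND side fails)
```

If the narrative had silently implied AND between the payment options, a developer would have coded a stricter guard than the business actually wanted.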

Additionally, it can be a grave mistake to list a precondition that is often, but not always, needed for the task. This mistake will often go unnoticed until sometime after the deployment of the solution. Then it becomes a change order that may or may not receive the same level of diligence in determining the impact of the requested change.

Finally, there is a gap in the precondition component of most use cases. The way they are crafted, it is assumed the use case can be ignored if the preconditions are not met. The potential problem is that this use case might itself be a precondition of a much larger system. In these circumstances, the BA needs to make sure they communicate what should happen if a precondition is NOT met.


Use Case Size, How Much Is Just Right?


There is a threshold for determining if a use case is:

- too small (therefore not really needed)
- too small (probably missing something)
- too small (main point should be integrated into another case)
- too large (needs editing)
- too large (needs decomposition)
- or that the size is just right

and often that determination is made strictly on the basis of step count in the main flow. I've seen some analysts claim that the step count should fall between 10 and 15. Although this isn't a bad place to start, it is not just the number of steps, because other factors might come into play, such as:

- the complexity of the steps
- the number of actor / system interactions
- the audience that will be consuming the information

I'll even take this a bit further, because the main flow steps in the use case really shouldn't overshadow the rest of the information in that use case. The steps are often the "greatest among equals" among its components, but the entire use case length is the real concern.
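If it helps to picture the rule of thumb, a naive first-pass check on step count might look like the sketch below. The 10-15 band is the one mentioned above; everything else (complexity, interactions, audience, overall length) still needs human judgment:

```python
def size_verdict(step_count, min_steps=10, max_steps=15):
    """Naive first-pass check on a use case's main-flow step count.

    The 10-15 band is only a starting point; step complexity, the number
    of actor/system interactions and the audience still require judgment.
    """
    if step_count < min_steps:
        return "too small: check for missing steps or merge candidates"
    if step_count > max_steps:
        return "too large: check for editing or decomposition opportunities"
    return "within the rule-of-thumb range"

print(size_verdict(7))
print(size_verdict(12))
```

A check like this only flags candidates for review; it can never decide on its own that a use case is right-sized.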

I have a golden rule; well, it is more of a golden guideline. Overall, the use case should fit on one page in MS Word. In fact, I tend to create a page break for each use case, and this isn't by accident. If the use case doesn't mostly fill the page, its value in the package should be challenged. By the same measuring stick, if the use case is more than one page, it should be challenged with regard to decomposition opportunities. 

I’ll admit that this guideline was born out of how I hated seeing a use case span two pages. To me, this was a potential thought breaker and I felt that contiguous thought is a paramount concern in use case communication. I admit, this is a bit goofy but it does seem to work for me. As always, I would love to see your thoughts on the topic.


Struggling With Business Rule Discovery - Jump In Use Case Design

In software development projects, business rules are either a constraint (legal, regulatory or organizational policy) or a procedure traceable to some other operation. Remember, software systems are much like an environmental ecosystem. Any change can have effects on upstream or downstream operations in ways that are best discovered before the system goes live. 

A professional business analyst can and will perform classic elicitation methods to discover business rules and to validate them. However, business rules can live in the realm of tribal knowledge, meaning that this critical information is often undocumented by the workgroup but used so often that they assume everyone else knows about it. Interview and workshop methods may miss critical business rules, and the business analyst might not have enough time to employ observation techniques. 

Even if the BA is employing observation techniques for both requirement and business rule discovery, use case design is an excellent method to facilitate the process. Use cases are a great way to find information dust bunnies. Sorry, I seem to be really into analogies in this piece. Dust bunnies is a term for the small clumps of dust and hair found under furniture and in seldom-visited corners of a room. 

Just as actual dust bunnies are harmful to electronics, these information dust bunnies can cause real harm if they are not taken care of before software deployment. A use case is like a small virtual deployment: it uses imagination to foresee how a new system will impact a process, its actors and associated subsystems. 


MoSCoW Technique And The Requirements Walkthrough

If you have studied project management or business analysis, you have likely heard of the MoSCoW technique. It is a classification method used primarily for prioritization or scope management. The idea is to rate the item under scrutiny (such as a requirement); the breakdown is as follows: 

- M (must have)
- S (should have)
- C (could have)
- W (won't have)

I’ve seen analysts send out requirement lists, ask users to provide ratings and then send the list back to the analyst. Although this can work to help define requirements, there is a significant missed opportunity in that approach.

A requirements walkthrough (sometimes called a structured walkthrough) is a formal meeting where key stakeholders evaluate each requirement one at a time and make a determination about the requirement. Although it has its roots in validation, it is more of a method to facilitate a strong shared understanding of each requirement. 

The MoSCoW technique and the requirements walkthrough are a nice combination and can greatly enhance requirement packages, regardless of whether you use an RTM, user stories, KPIs or even a specification list. When one stakeholder classifies a requirement at either extreme (M or W), another stakeholder in the meeting may challenge that classification. What happens next is the most valuable aspect: they talk about it openly where other stakeholders can hear it. Even if the two stakeholders never agree, the shared perspectives are now out in the open and can be addressed as needed. 
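One way to prepare for such a walkthrough is to flag the extreme disagreements ahead of time so the meeting starts with the conversations that matter most. A hypothetical sketch (the stakeholder names and requirement IDs are invented):

```python
# Hypothetical MoSCoW ratings collected per requirement before the meeting.
ratings = {
    "REQ-001": {"Alice": "M", "Bob": "M"},
    "REQ-002": {"Alice": "M", "Bob": "W"},   # extreme disagreement
    "REQ-003": {"Alice": "S", "Bob": "C"},
}

def extreme_conflicts(ratings):
    """Requirements rated must-have (M) by one stakeholder and won't-have
    (W) by another: prime candidates for open walkthrough discussion."""
    return [req for req, votes in ratings.items()
            if "M" in votes.values() and "W" in votes.values()]

print(extreme_conflicts(ratings))  # ['REQ-002']
```

The point is not to resolve the conflict in code; it is to make sure REQ-002 gets discussed out loud rather than averaged away in a spreadsheet.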

The combination of MoSCoW and requirements walkthroughs offers more value than a traditional "thumbs up" or "thumbs down" evaluation of requirements. It greatly reduces the chance that someone will "rubber stamp" requirements due to disinterest, political pressure or misconceptions about the requirement itself.


Data Dictionary: Lean Is A Reasonable Objective

How much time is wasted in a project by data-specific misunderstandings? No one really knows, because data misunderstandings and their impact are extremely difficult to track. A data dictionary can certainly help, but it is not a bag of magical pixie dust. Data dictionaries take time to create, require strategies to get them successfully leveraged by teams that are unaccustomed to using them, and need maintenance in order to stay relevant. 

A data dictionary is a table that describes data, and an analyst can exert quite a bit of freedom in how it is created. The most classic representation is a name, description, composition and associated values. One problem I've seen in data dictionaries in the past is that someone adopted a "fill in the template" approach. In one case, someone duplicated whole data sets, changed the names and then submitted the data dictionary as a completed deliverable. It was a classic example of submitting a deliverable being valued more highly than completing a high-quality deliverable. 

NAME: Compliance Inspector
DESCRIPTION: One or more individuals who perform compliance inspections, document the results and escalate violations in the appropriate manner. 
COMPOSITION: Employee ID, Employee Name, Office Location, Phone, Email and Department ID.
VALUES: Mandatory: Employee ID, Employee Name

In this example, we are describing an entity that will be responsible for entering data into the system for a specific record type, such as a compliance audit. One thing about the example might jump out at you if you are into lean data sets. Aside from the name and description, there doesn't appear to be anything unique about this data element from the data entry standpoint. 

NAME: Compliance Inspector
DESCRIPTION: One or more individuals who perform compliance inspections, document the results and escalate violations in the appropriate manner. 
COMPOSITION: Employee ID, Employee Name, Certification ID, Certification Renewal Date, Office Location, Phone, Email and Department ID.
VALUES: Mandatory: Employee ID, Employee Name

In the first example, we could have dozens of identified entities, but the system wouldn't do anything unique with them. With the addition of the Certification ID and Certification Renewal Date, we have elements that are truly unique to this entity. So we could have a General Data Entry entity in the data dictionary that would be used the majority of the time to meet system requirements. All the other similar but unique entities would appear thereafter and support the associated business rules.
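The lean approach can be modeled directly: the shared composition lives in one general definition, and each entity adds only its unique fields. A sketch using the field names from the example above (the structure itself is an illustration, not a standard):

```python
from dataclasses import dataclass

@dataclass
class DictionaryEntry:
    name: str
    description: str
    composition: list
    mandatory: list

# Composition every data-entry entity shares (the "General Data Entry" entry).
GENERAL_COMPOSITION = ["Employee ID", "Employee Name", "Office Location",
                       "Phone", "Email", "Department ID"]

# The Compliance Inspector entry adds only what is unique to it.
compliance_inspector = DictionaryEntry(
    name="Compliance Inspector",
    description="One or more individuals who perform compliance inspections, "
                "document the results and escalate violations appropriately.",
    composition=GENERAL_COMPOSITION + ["Certification ID",
                                       "Certification Renewal Date"],
    mandatory=["Employee ID", "Employee Name"],
)

unique_fields = [f for f in compliance_inspector.composition
                 if f not in GENERAL_COMPOSITION]
print(unique_fields)  # ['Certification ID', 'Certification Renewal Date']
```

Keeping the shared fields in one place means a change to the general composition updates every entity that references it, which is most of the maintenance burden mentioned earlier.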


Bad Requirement: It Must Be Good

One clear example of a bad software development requirement is the appearance of the word "good" within it. It will generally appear in the format of "the ____ must have a good ____". What is good vs. what is bad? Is coffee good for you? Does coffee make you smarter? How much intake of coffee is good, how much is bad and what is the relative impact in between? How much coffee is enough to reach the threshold of good results?

In many cases, saying something should be good is an oversimplification and doesn't provide enough granularity to be actionable by the individual who depends on the requirement to guide development. How about the age-old debate between "good vs. evil"? If you are a good dancer, does that also make you evil if a person believes that dancing is a sin? What about if you are alone and occasionally dance in the dark like the classic Bruce Springsteen song?

The point is that software developers and testers will have a difficult time making sure that a software deliverable meets a requirement that focuses upon being "good". The best they can do is make a subjective evaluation that if it appears to be "bad", then it likely would fail to be "good". 

EXAMPLE: The note taking portion of the application must be good in order to facilitate a proper determination on the outcome of the recorded case. 

In the above example, I would go directly to the source of the requirement and ask some clarifying questions about the note taking needs for recorded cases. This would involve both the ability to create notes and the ability to reference them when needed. Taking this forward, we might be looking at some requirement refinement such as the examples noted below. 

- The user shall have the ability to log and view notes alongside case records.
- The user shall have the ability to log and view notes in a window separate from the case being examined.
- The user's notes shall always be attached directly to the case that was being viewed when the note was created.
- The system shall display a visual indication to the user if a case has attached notes.

All of the above requirements can be tested from both a functional and a usability perspective. When validating these requirements, the BA can ask the requirement source if there are any other characteristics they would associate with a "good" note taking experience for logged cases. 
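A BA could even automate a first pass for vague wording before a review or walkthrough. A minimal sketch (the word list is illustrative, not exhaustive, and no scan replaces reading the requirement in context):

```python
import re

# Illustrative (not exhaustive) words that signal an untestable requirement.
VAGUE_WORDS = {"good", "bad", "fast", "easy", "user-friendly", "robust", "proper"}

def flag_vague(requirement):
    """Return any vague words found in a requirement statement."""
    words = set(re.findall(r"[a-z-]+", requirement.lower()))
    return sorted(words & VAGUE_WORDS)

req = ("The note taking portion of the application must be good in order to "
       "facilitate a proper determination on the outcome of the recorded case.")
print(flag_vague(req))  # ['good', 'proper']
```

Each flagged word is a prompt to go back to the requirement source with clarifying questions, exactly as described above.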
