SaMD Software bug and issue tracking.

Icculus

Starting to get Involved
Hello - we're having some issues with an overly burdensome issue tracking system. Our issue tracking system outputs many issues per week, including both user errors and actual bugs. However, in the case of actual bugs in the software, regardless of how minor or major, I assume everything would need to be placed in the known anomalies list.

The problem is that it is overly burdensome to constantly add to the anomaly list, evaluate each item for risk, and then create backlog items.

Are there any other approaches that people are using to handle bugs or errors discovered using software defect tracking tools?
 

yodon

Leader
Super Moderator
My initial reaction is that it sounds like your software development process needs to be improved! What do you (not) do for unit-level verification, integration testing, and software system testing that lets these bugs slip through? Where, at the earliest points in the process, can you eliminate some of these? Use the data you have to improve your software process and eliminate bugs before they are released! (And don't put the burden solely on the release test team! I expect that they are restricted to scripted testing?)

If it's a bug, it needs to be evaluated for risk so you can't brush that off.

If you're getting "user errors," don't brush those off, either. That may well be an indication of a poorly designed UI.

Why is adding to the anomaly list so burdensome? That should readily be generated by the tool you're using.
 

Icculus

Starting to get Involved

I'm not on the development side of things, but we use an error detection tool called Sentry, and those "sentries" are reported fairly regularly (multiple times per week). Each needs to be analyzed individually to determine whether a user error occurred (e.g. the user entered input in an unacceptable format, such as a malformed email address) or whether there's an actual bug in the software. This produces multiple bugs per week, which is becoming burdensome for the product team.

We currently don't have an automated process that does this.

From your perspective, does every detected bug from a development tool such as "Sentry" need to go into the anomaly list? Or can there be some kind of criteria to determine what does/doesn't need to be included?
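For what it's worth, the manual triage described above (sorting tracker events into user errors vs. candidate bugs) could get a scripted first pass. Here's a minimal sketch; the field names and the list of "user error" exception types are assumptions for illustration, not Sentry's actual schema:

```python
# Sketch of an automated first-pass triage for error-tracker events.
# The "type" field and the exception names below are hypothetical.

# Exception types we treat as user input errors rather than software bugs.
USER_ERROR_TYPES = {"ValidationError", "InvalidEmailFormat"}

def triage(event: dict) -> str:
    """Return 'user-error' or 'candidate-bug' for a single tracker event."""
    if event.get("type") in USER_ERROR_TYPES:
        return "user-error"
    return "candidate-bug"

# Example events, made up for illustration.
events = [
    {"type": "InvalidEmailFormat", "message": "bad email format on signup"},
    {"type": "NullPointerException", "message": "crash in report view"},
]

labels = [triage(e) for e in events]
```

Only the "candidate-bug" bucket would then need human review and risk evaluation; the classification rules themselves would of course need to be validated as part of your quality process.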
 

yodon

Leader
Super Moderator
Generally speaking, and given the limited insight you've provided, I'd say no. If you have repeated findings for the same issue, just abstract the issue into a single entry on the anomaly list. It might be helpful to maintain linkage for everything, especially for those you consider duplicates (link back to the 'original' issue).
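To make the abstraction/linkage idea concrete, here is one way it could be sketched: group raw issues by a duplicate key (Sentry calls this a fingerprint), emit one anomaly entry per group, and keep a link from each duplicate back to the original. The issue IDs and fingerprints are made up for illustration:

```python
from collections import defaultdict

# Made-up raw issues; "fingerprint" is the duplicate-detection key.
issues = [
    {"id": 101, "fingerprint": "timeout-on-save"},
    {"id": 102, "fingerprint": "null-deref-report"},
    {"id": 103, "fingerprint": "timeout-on-save"},  # duplicate of 101
]

# Group issue IDs by fingerprint.
groups = defaultdict(list)
for issue in issues:
    groups[issue["fingerprint"]].append(issue["id"])

# One abstracted anomaly per fingerprint; duplicates link to the original.
anomaly_list = []
links = {}
for fingerprint, ids in groups.items():
    original, *duplicates = ids
    anomaly_list.append({"fingerprint": fingerprint, "original_id": original})
    for dup in duplicates:
        links[dup] = original
```

The anomaly list stays short (one entry per distinct problem) while the links preserve full traceability back to every raw finding.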
 