There is not much information about attribute gage (gauge) R&R readily available online (that I was able to find), so I thought I'd cover what I've done. First, the basics: an attribute is something you can't quantify, generally a visual inspection, e.g., is this part "red"? While it generally seems simple, especially to the technical leads, operators can often get tripped up trying to pass or fail parts based on a description or a couple of pictures. In the red example, can operators compare against the Pantone reference effectively, and is that repeatable? You want your acceptance criteria, and how you propose to test them with trained operators, set up beforehand.
Your gage R&R should mimic what you actually do on the line. If an operator inspects the attribute before passing the part, and a final QC then repeats that same inspection, you have two inspections, and both should be part of your testing. Present this as the entire package when possible: QA types tend to freak out when they hear you would accept a 10% chance of passing a bad part, when in reality, with two independent inspections, it's closer to 1% (and with three, 0.1%). Having the two (or more) inspections will be really helpful when you get to the acceptance criteria portion.
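The arithmetic behind that point is just multiplication of independent miss rates. A minimal sketch, assuming each inspection independently misses a bad part at the same per-inspection rate (real inspections may be correlated, which makes the combined rate worse than this):

```python
def combined_miss(per_inspection_miss: float, n_inspections: int) -> float:
    """Chance a bad part slips through every inspection in the chain,
    assuming each inspection misses independently at the same rate."""
    return per_inspection_miss ** n_inspections

print(combined_miss(0.10, 1))  # 10% with a single inspection
print(combined_miss(0.10, 2))  # ~1% with operator + final QC
print(combined_miss(0.10, 3))  # ~0.1% with a third inspection
```

This is why presenting the whole inspection package matters: the per-station miss rate looks alarming on its own, but the system-level number is what the part actually sees.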
Test Method: You need to determine the number of operators, parts, and trials. Trials is easy: use three. Everyone does; more is obviously better, but three is generally good enough, and you don't want to be looking at parts all day. For operators, you need at least two, but if you have three or more lines or shifts, you can include those easily enough. The number of parts is where it gets tricky, and you're going to have to make a judgment call. Medical device companies generally rely on some sort of confidence and reliability requirement based on the severity or RPN of the potential failure; however, for gage R&R everyone seems to follow the auto industry guidelines, which call for a smaller quantity. Thirty is generally defensible either way.
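If you do need to tie part count to a confidence/reliability claim rather than the auto-industry convention, one common translation is the zero-failure "success-run" sample size. This is a general statistical sketch, not something prescribed above, and it assumes a pass/fail demonstration with no failures allowed:

```python
import math

def success_run_n(confidence: float, reliability: float) -> int:
    """Smallest zero-failure sample size n satisfying
    1 - reliability**n >= confidence (success-run theorem)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(success_run_n(0.95, 0.95))  # 59 parts for a 95%/95% claim
print(success_run_n(0.90, 0.90))  # 22 parts for a 90%/90% claim
```

You can see why 30 parts sits in between the typical auto-industry quantity and the stricter confidence/reliability numbers; document whichever rationale you pick.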
The Acceptance Criteria: The key one medical device companies are concerned with is the probability of a miss, which is defined as:
- Probability of a Miss = (# times a bad part was passed) / (# of opportunities) [i.e. number of inspections]
- Probability of a Miss (2 inspections) = (# of bad parts missed by one inspection that were also missed by the second inspection) / (# of opportunities)
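The definitions above can be computed directly from the recorded results. A minimal sketch, with a hypothetical data layout (one `(is_bad, passed)` record per inspection opportunity); note that some practitioners divide misses by the number of opportunities on bad parts only, while the definition above divides by all inspections:

```python
def prob_of_miss(records):
    """records: list of (is_bad, passed) tuples, one per inspection
    opportunity.  A miss is a bad part that was passed."""
    misses = sum(1 for is_bad, passed in records if is_bad and passed)
    return misses / len(records)

# Hypothetical results: one bad part passed out of four opportunities.
records = [(True, True), (True, False), (False, True), (False, True)]
print(prob_of_miss(records))  # 0.25
```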
Other Information: I recommend keeping it simple and only requiring a certain probability of a miss in your protocol (and maybe effectiveness as well). You'll still want the rest of the information you collect documented, but that is a business decision, not a safety decision. You can cover:
- Effectiveness = (# of parts correctly identified) / (# of opportunities) [a low effectiveness indicates your process is probably not robust and will give you trouble over time; greater than 70% is generally acceptable]
- Probability of a false alarm = (# times a good part was rejected) / (# of opportunities) [waste]
- Repeatability = (# agreements) / (# parts inspected) [calculate per operator and in total; if an operator has low repeatability, less than 80% or so, he or she needs retraining]
- Reproducibility = (# agreements among all operators) / (# parts inspected)
- Bias = (Probability of a false alarm) / (Probability of a miss) [calculate per operator and total]
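All of these metrics fall out of the same raw data. A sketch implementing the definitions above, with a hypothetical data layout: `truth` maps each part to whether it is bad, and `calls` maps `(operator, part, trial)` to whether the operator passed it:

```python
def attribute_metrics(truth, calls):
    """truth: {part: is_bad}.  calls: {(operator, part, trial): passed}.
    Implements the miss/false-alarm/effectiveness/repeatability/
    reproducibility/bias definitions from the post."""
    n = len(calls)
    operators = {op for op, _, _ in calls}
    parts = {p for _, p, _ in calls}

    misses = sum(1 for (_, p, _), passed in calls.items()
                 if truth[p] and passed)
    false_alarms = sum(1 for (_, p, _), passed in calls.items()
                       if not truth[p] and not passed)
    correct = sum(1 for (_, p, _), passed in calls.items()
                  if passed == (not truth[p]))

    def op_agrees(op, p):
        # All of one operator's trials on a part gave the same answer.
        trials = [v for (o, pp, _), v in calls.items() if o == op and pp == p]
        return len(set(trials)) == 1

    repeatability = {op: sum(op_agrees(op, p) for p in parts) / len(parts)
                     for op in operators}
    reproducibility = sum(
        len({v for (_, pp, _), v in calls.items() if pp == p}) == 1
        for p in parts) / len(parts)

    p_miss = misses / n
    p_false = false_alarms / n
    return {"effectiveness": correct / n,
            "p_miss": p_miss,
            "p_false_alarm": p_false,
            "repeatability": repeatability,
            "reproducibility": reproducibility,
            "bias": p_false / p_miss if p_miss else float("inf")}
```

With `passed=True` meaning the part was accepted, a correct call is passing a good part or rejecting a bad one; repeatability checks each operator against themselves across trials, and reproducibility checks all operators and trials against each other per part.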
The Test Setup: For this you'll need two experts in the attribute being inspected; the experts sort the good parts from the bad, label them in some fashion, and randomize them. In my experience you should have at least 25% bad parts, even though your process (hopefully) isn't likely to have a 25% reject rate. It is nice to include some very marginal parts, but those can be hard to find and agree on. Don't have the operators who will perform the test make the parts, if you can avoid it.
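The labeling-and-randomizing step is easy to script so each operator sees each part in a fresh order every trial. A sketch, with hypothetical part labels; the recorder keeps the truth key, never the operators:

```python
import random

def run_orders(part_labels, operators, trials, seed=None):
    """Independent randomized presentation order per operator per trial."""
    rng = random.Random(seed)  # fixed seed makes the orders reproducible
    orders = {}
    for op in operators:
        for trial in range(1, trials + 1):
            order = list(part_labels)
            rng.shuffle(order)
            orders[(op, trial)] = order
    return orders

parts = [f"P{i:02d}" for i in range(1, 31)]  # 30 blinded labels
orders = run_orders(parts, operators=["A", "B"], trials=3, seed=7)
```

Printing `orders[("A", 1)]` gives operator A's first-trial sequence; hand the recorder the full dictionary and the operators nothing but parts.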
The Test: Set it up so an operator makes a determination and someone else records it; don't let the operators know which sample they are being given, see previous results, or talk amongst themselves. Do the testing on the line and try to keep the production pace. During a gage R&R, I find the operators tend to err on the cautious side.
The Results: There you go: you can now say your test method is qualified and have the data to back it up. You can also make operator decisions based on the results: maybe move one operator around to catch things earlier in the process, identify the go-to person, etc. I find the attribute gage R&R easier to perform than the variable one, yet it is generally more important than a dimensional study from a safety perspective, because there are more things I can't measure easily.