The more I do performance standards training for agency managers, supervisors and employees, the more I find myself in agreement with fellow FedSmith.com author Robbie Kunreuther, who has been railing against the current performance management approach by the Federal government for several years. My guess is that Robbie was not the least bit surprised to read the GovExec.com article (August 6, 2008) by Robert Brodsky and Elizabeth Newell titled “Former employees say Defense audit agency is ‘broken.’”
In response to a Government Accountability Office (GAO) report which found that Defense Contract Audit Agency (DCAA) supervisors improperly influenced audits, resulting in findings more favorable to a large Federal contractor, a number of former DCAA employees opined that the agency had been in serious decline for years. They “placed much of the blame on top-level managers…for developing a culture beholden to job performance metrics rather than taxpayers,” and for “DCAA’s fixation with performance requirements and audit deadlines that they said are not tailored to the size or complexity of the project.”
They described “an environment where supervisors got upset when auditors used the wrong font in their reports or made spelling errors, but appeared unconcerned that serious overbilling mistakes may have slipped through the cracks. And they recalled times when incomplete audits were pushed out the door by managers more concerned with meeting internal quotas and timetables than by the quality of their work…”
“The problems, according to one 25-year veteran of the agency, can be traced back to the Defense Management Information System, a tool for tracking the status of ongoing audits.”
“The system assigns auditors specific responsibilities and provides them with a set amount of time — usually 30 days — to complete them. It also measures progress against a multitude of very detailed and specific metrics. Staff members at DCAA headquarters in Fort Belvoir, Va., track the output and compare results across regional offices.”
“‘In my opinion, the end result was a massive, bloated, soulless bureaucracy that totally lost touch with the taxpayer,’ a 25-year employee said, adding that the pressure to close out jobs and produce clean metrics — or green lights in the stoplight-style measurement system — was intense and often distracted from efforts to question contractor costs.”
“‘In the end, defense contractors big and small are getting away with murder because they know we at DCAA are slaves to the metrics,’ the former employee said.”
The article noted that the “GAO report also cited cases of intimidation. Supervisors in the California region threatened agency auditors with personnel action if they did not change reports to favor large contractors, GAO said. Unsupervised trainees allegedly were responsible for handling complex multimillion-dollar audits, leading to major mistakes. And, auditors who agreed to speak with GAO investigators reportedly were subject to harassment from managers.”
Robbie Kunreuther didn’t specifically predict the decline and fall of the DCAA, as detailed by some of its former employees, but, in his March 11, 2008, FedSmith.com article titled “Goals, Objectives, and the Everyday Employee,” he questioned the use of metrics as a means for measuring employee performance, opining that “Human resources folks know this is nonsense. How is a Staffing Specialist supposed to demonstrate commitment to the Air Force’s goals and objectives? Hers is a day-in/day-out job. There are vacancies, announcements, applications, selections, etc. If she’s been working in HR for 15 years, don’t you think she already understands how her job is connected to your agency’s mission?
“Give her metrics if you want. Tie her tightly to the Government Performance and Results Act (GPRA). Enroll her in your latest pay-for-performance system. Be sure to burden her supervisor…to keep counts and scorecards. In the end, however, if she’s like most of you reading this article (from the lowest to the loftiest) she’ll do her job as best she knows how.”
I have some experience with what Robbie was talking about here. As Regional Director of Personnel for one Federal agency, I was responsible for managing all of the traditional personnel management functions, including Staffing. The Staffing Specialists in my office worked very hard, were technically proficient, and had a strong customer service orientation. One of their major responsibilities was to announce vacancies and provide selecting officials with a certificate of eligibles. There may be agencies in which managers and supervisors are completely satisfied with the speed of the recruitment process, but I never worked for one. Aside from the usual complaints about how long it took to fill vacancies, though, we thought we were doing pretty well, until top agency management introduced a measuring device titled “How do you stack up?”
The intent of this initiative was to compare the performance of each region against that of all other regions in a wide variety of operational and administrative support functions. When the first several reports came out, we were near the bottom in the timeliness of service measures for Staffing and Position Classification. I was surprised to learn that so many regions were apparently more efficient than we were, so I called several of my colleagues in an effort to learn what they had done to improve timeliness. In virtually every case, they said that keeping the personnel actions out of their logs for as long as possible was the key to their success. So, for example, if a recruit action came in with anything missing or any mistakes, they promptly shipped it back to the originating office, thus taking it off their books until the action had been perfected and returned.
Our practice had been to work informally with originating offices and do pen-and-ink and similar changes to incomplete or erroneous requests for personnel action, typically resolving the matter by phone rather than sending an action back. However, as we continued to hover close to the bottom of the regional rankings, our Regional Administrator made it clear that he was not happy. My boss, the Assistant Regional Administrator for Administration, quickly surmised that it would be better for him, for me, and for my staff if we moved up the regional rankings ladder very rapidly. So, as other regions had told me they were doing, we started sending actions back to the originating office if they weren’t letter-perfect, thus stopping our clock. Sure enough, we rose in the rankings to the extent that we were consistently at or near the top. Had we improved customer service? Hardly. What we had done was learn to play the game, at the expense of our customers.
Robbie noted in his March 11 article that W. Edwards Deming, the father of total quality management, had “advised us in the 1970s – 80s that individual metrics don’t teach us how to do our jobs better. They teach us how to keep up with a bean count.”
My example above seems to fit Robbie’s theory very well. We disregarded teamwork (in terms of working cooperatively with the offices we serviced, as well as with other regions, for the benefit of the agency as a whole) and made life more difficult for our customers. As Robbie speculated, the individual metrics did serve to undermine the very mission they were designed to support. Before long, we were actively looking for opportunities to send actions back to the originating offices so we would look better against the other regions. Our focus, in handling requests for personnel action, shifted from constantly searching for ways to better serve our customers to how we could best protect ourselves from looking bad in the “How do you stack up?” reports.
There is an old management saw that goes “What gets measured gets managed,” and I think there is a great deal of truth to it. In the DCAA example, a number of managers and supervisors clearly elected to track auditors in terms of such minutiae as the font they used in reports and the typographical errors they made rather than on the quality and accuracy of their audits. Whether they did so in a good-faith effort to comply with the Government Performance and Results Act, the President’s Management Agenda, and Office of Personnel Management and agency guidance, or for more nefarious reasons, I cannot say. I will admit here that I have grown steadily more cynical about my former employer, the Federal government: not about the overwhelming majority of competent, dedicated and well-meaning managers, supervisors and employees, but about some of the political appointees who set agency policy.
Whatever the reasons, I have talked to many managers and supervisors who say they spend so much time trying to figure out what metrics to apply to their subordinates and how to measure them (and then in tracking those measures) that they have little time left to see what their employees are actually doing. When auditors have to be more concerned about the font they use on their reports or their typographical errors than the quality and accuracy of their audits, something is desperately wrong with the performance management system.
I suspect that the DCAA revelations are just the tip of the iceberg and that many other agencies are also “slaves to the metrics.” I hope the next administration will re-examine the Federal government’s approach to measuring individual performance, with the goal of ensuring that every employee’s focus is on the critical work they were hired to perform, not on extraneous minutiae that actually keeps them from accomplishing that work.