A brief history of modern digital forensics lays a foundation for asking whether “push-button forensics” has delivered on its promises, and whether it will continue to suffice as the field adopts ever more sophisticated automation.
When the term “push-button forensics” was coined more than a decade ago, it was a sarcastic pejorative. Many forensic examiners resisted vendors’ efforts to automate the acquisition and analysis of digital data, believing that automation would make their science harder to document, validate, and thus defend in a court of law.
That started to change as hard disk drives reached the 1TB range and the iPhone led the way for a new breed of smartphone. Not only were storage sizes increasing; the number of storage media being seized — phones, gaming devices, external drives, USB sticks, and so on — climbed, too. Forensic labs ended up with backlogs, some stretching to several months. Those backlogs delayed investigations, which in turn could jeopardize criminal defendants’ right to a speedy trial.
“The number of people available who can manually sort through the complex evidence isn’t keeping pace,” blogged David Kovar in late 2009 — the year the United States’ Great Recession tightened both public- and private-sector belts, limiting labs’ ability to hire and train more people.
At the time, Kovar and Dark Reading’s John Sawyer saw opportunity. Forensic practitioners could adopt a two-tier system already common in law and private-investigation practices: assign the automated work to junior associates, leaving analysis and interpretation to senior examiners.
Yet other factors were in play. “Doing more with less,” a recession-era theme, would persist throughout the next decade — even as technology trends accelerated. By now most examiners agree that “push-button forensics” has delivered on its promises of efficiency, but whether it’s preserved scientific principles is another question entirely.