Naive Criminals: How Ordinary People Hide Digital Evidence
Most criminals are far from professionals, so the methods they use to hide data are often naive. Yet, these methods often work simply because investigators may not have the time or expertise to conduct a thorough analysis. So, what tricks do the average John and Mary use to hide evidence? The list is short, and the tricks themselves are quite simple.
One day, I was setting up a nested crypto-container protected by a hardware crypto key, trying to figure out how to access the information if the key was removed from the computer. The answer became increasingly clear: you can’t. At that moment, I was asked to comment on a document—essentially a police expert’s guide to finding digital evidence. The document impressed me so much that, without asking the authors for official permission, I decided to write an article based on it. The nested crypto-containers and hardware keys were set aside, and I began studying the methods that, according to police, most suspects use to cover their tracks and hide evidence.
Moving Data to Another Folder
No, we’re not talking about hiding a collection of pornographic images in a folder called “Engineering Coursework.” We’re talking about a naive but often effective way to hide evidence by simply moving certain data to another location on the disk.
Is this obvious stupidity or extreme innocence? As naive as it may seem, this method can work for rare and exotic data—like jump lists (do you know what those are?) or a WhatsApp messenger database. An expert may not have the time or motivation to search the entire computer for a database from… well, from what exactly? There are hundreds, if not thousands, of instant messaging apps; good luck figuring out which one the suspect used. Each app has its own file paths, names, and even database formats. Manually searching a computer with tens of thousands of folders and hundreds of thousands of files is a hopeless task.
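This is why forensic tools ignore names and paths entirely and look for file signatures instead. As a rough sketch of the idea (the helper name and the approach of scanning for the SQLite magic header are mine for illustration, not any specific tool's code), almost every messenger database is SQLite, and every SQLite 3 file starts with the same 16 bytes:

```python
import os

SQLITE_MAGIC = b"SQLite format 3\x00"  # documented first 16 bytes of every SQLite 3 file

def find_sqlite_databases(root):
    """Walk a directory tree and return files that begin with the
    SQLite 3 magic header, regardless of their name, extension,
    or where on the disk they were moved to."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if f.read(16) == SQLITE_MAGIC:
                        hits.append(path)
            except OSError:
                continue  # unreadable file: skip it, don't abort the scan
    return hits
```

A renamed WhatsApp database hidden in "Engineering Coursework" is found just as quickly as one left in its default location.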
Obviously, this method only works if the crime isn’t related to information security. The investigator may not question the contents of the suspect’s hard drive, and the analysis becomes a formality. In such cases—and only in such cases—this “Elusive Joe” method might work (in the old joke, Joe is elusive only because nobody bothers to chase him).
Using “Secure” Communication Methods
This point especially impressed me. Modern criminals, surprisingly, are fairly knowledgeable about security. They understand how chat messages are transmitted, where they’re stored, and how to delete them.
The document details the ways investigators can still access chats. These include examining freelist areas in SQLite databases, official requests to messenger providers (for example, Microsoft will hand over Skype chat logs without much fuss since they’re stored on company servers), and even requests to smartphone manufacturers (like the case where BlackBerry helped Canadian police track down a drug gang using the company’s messenger on the old BlackBerry OS).
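The freelist trick deserves a closer look. When records or whole pages are deleted, SQLite doesn’t wipe them: freed pages go onto an internal freelist, and the old data can linger there until the database is vacuumed. The documented 100-byte SQLite file header even tells you how much freed space there is. A minimal sketch (the function name is mine; the header offsets are from the SQLite file format specification):

```python
import struct

def read_freelist_info(db_path):
    """Parse the 100-byte SQLite file header and return its freelist
    bookkeeping fields. Pages on this list may still contain deleted
    records until the database is vacuumed."""
    with open(db_path, "rb") as f:
        header = f.read(100)
    page_size = struct.unpack(">H", header[16:18])[0]
    if page_size == 1:        # the value 1 encodes a 65536-byte page
        page_size = 65536
    first_trunk, = struct.unpack(">I", header[32:36])       # first freelist trunk page
    freelist_pages, = struct.unpack(">I", header[36:40])    # total pages on the freelist
    return {
        "page_size": page_size,
        "first_freelist_trunk_page": first_trunk,
        "freelist_pages": freelist_pages,
    }
```

A nonzero freelist count is exactly the hint an examiner needs: there is freed space in the file worth carving through for “deleted” chats.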
This reminds me of a funny story I heard at a police event. American police arrested a man suspected of large-scale drug trafficking. The suspect was tech-savvy and used Apple iMessage as his only communication method. iMessage messages aren’t stored on Apple’s servers, and the suspect carefully deleted his chat history after each session. His iPhone backup contained nothing interesting.
However, when police examined his computer (he had a Mac), they were thrilled: they found hundreds of thousands of messages the suspect didn’t even know existed. The key was Apple’s then-new Continuity system, which synchronized iMessage messages across all devices registered to the same account.
Interestingly, the speaker complained that all existing iMessage database analysis programs at the time couldn’t handle that many messages and simply crashed; the police had to write their own utility to parse the bloated database.
The moral? There isn’t one: if you’re not an IT specialist, you simply can’t know about these things.
Renaming Files
Another naive attempt to hide evidence is renaming files. As simple as it sounds, renaming, say, an encrypted database from a secure messenger to something like C:\Windows\System32\oobe\en-US\audit.mui can easily escape even an expert’s notice. Windows directories contain thousands of files; finding something unusual (especially if it doesn’t stand out in size) is impossible to do manually.
How are such files found? The average person can be forgiven for not knowing about specialized programs designed to search for such files on suspects’ disks (and disk images). It’s not just about searching by file name; a comprehensive approach is used, analyzing traces (like Windows registry entries) of installed applications, then tracking file paths accessed by those apps.
Another popular method is called carving, or content-based searching. This approach, also known as “signature searching,” has been used in antivirus programs since the beginning of time. Carving can analyze both the contents of files on the disk and the disk itself (or just the occupied areas) at a low level.
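As a rough illustration of how signature searching works, here is a toy carver that scans a raw byte buffer for a few well-known magic numbers. Real tools do the same over entire disk images, handle fragmented files, and validate far more than the first few bytes; the signature table and function below are an illustrative sketch, not any product’s code:

```python
# A few well-known file signatures ("magic numbers") and the types they mark.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"SQLite format 3\x00": "sqlite",
}

def carve_signatures(data):
    """Scan a raw byte buffer (e.g. a chunk of a disk image) and
    return sorted (offset, file_type) pairs for every signature hit."""
    hits = []
    for magic, ftype in SIGNATURES.items():
        start = 0
        while True:
            idx = data.find(magic, start)
            if idx == -1:
                break
            hits.append((idx, ftype))
            start = idx + 1  # keep scanning past this hit
    return sorted(hits)
```

Because the scan looks at content rather than names, a renamed .mui “document” that is really a JPEG or a SQLite database is flagged immediately.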
Is it worth renaming files? This is another “Elusive Joe” trick that only protects against a very lazy investigator.
Deleting Files
It’s questionable how “naive” it is these days to try to hide evidence by deleting files. Files deleted from regular hard drives are usually easy to recover using signature searching: the disk is scanned block by block (focusing on fragments not occupied by existing files or other file system structures), and each block is analyzed for certain criteria (like being a file header or part of a text file). This scanning is fully automated, and the chances of successfully (at least partially) recovering deleted files are quite high.
Forensic software (here: Belkasoft Evidence Center) can recover deleted data, such as Skype chats, using deep analysis of SQLite databases.
Deleting files seems like a classic “naive” attempt to hide evidence. But not when files are deleted from SSDs.
Here’s a bit more detail on how deletion (and subsequent reading) works on SSDs. You’ve probably heard of “garbage collection” and the TRIM function, which help modern SSDs maintain high write (and especially rewrite) performance. The TRIM command is sent by the operating system; it tells the SSD controller that certain data blocks at certain (not really physical) addresses are freed and no longer used.
The controller’s job is now to erase those blocks, preparing them for quick new writes. But erasing data is slow and happens in the background when the disk isn’t busy. If a write command comes right after TRIM for the same “physical” block, the controller instantly swaps in an empty block by modifying the mapping table. The block meant for erasure gets a new “physical” address or is moved to a non-addressable reserve pool.
Here’s a tricky question: if the controller hasn’t physically erased data from TRIMmed blocks, can signature searching find anything in the free areas of an SSD?
The correct answer: in most cases, when trying to read data from a block that received a TRIM command, the controller will return either zeros or other data unrelated to the block’s real contents. This is because modern SSD specifications define the controller’s read behavior after a TRIM command. There are three possible behaviors: Undefined (the read returns the real block contents; rare in modern SSDs), DRAT (Deterministic Read After Trim: reads return the same fixed data, not necessarily zeros; common in consumer models), and DZAT (Deterministic Zeroes After Trim: reads always return zeros; often found in RAID, NAS, and server models).
So, in the vast majority of cases, the controller returns data unrelated to the actual contents. Recovering deleted files from an SSD is usually impossible, even seconds after deletion.
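On Linux you can check which of the three behaviors a drive advertises with `hdparm -I /dev/sdX`. The capability strings matched below are hdparm’s actual wording; the helper function itself is only an illustrative sketch of how to classify the output:

```python
def classify_trim_behavior(hdparm_output):
    """Given the text produced by `hdparm -I /dev/sdX`, report which
    read-after-TRIM guarantee the drive advertises."""
    if "Deterministic read ZEROs after TRIM" in hdparm_output:
        return "DZAT"       # reads of trimmed blocks always return zeros
    if "Deterministic read data after TRIM" in hdparm_output:
        return "DRAT"       # fixed (but not necessarily zero) data
    if "Data Set Management TRIM supported" in hdparm_output:
        return "Undefined"  # TRIM supported, but no read guarantee
    return "No TRIM"
```

In practice you would feed it the captured stdout of `hdparm -I`; a DZAT or DRAT verdict tells the examiner up front that carving the drive’s free space is likely a waste of time.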
Storing Data in the Cloud
Data in the cloud? You might think there are no criminals that dumb left, but you’d be wrong. Users regularly forget to disable iCloud Photo Library, OneDrive or Google Drive sync, or even more exotic types of sync—like the one that sends iPhone call logs (both phone and FaceTime) straight to Apple’s servers; iOS offers no separate toggle for it (maybe that’s why they forget?). I’ve already mentioned the “forgotten” Continuity mode and BlackBerry messenger examples.
There’s not much to add here, except that companies hand over cloud data to police with little resistance.
Using External Drives
Using encrypted flash drives to store information related to illegal activity seems like a genius idea to criminals. You don’t need to delete anything—just pull the flash drive out of the computer, and no one will ever get access (if the protection is strong). That’s how naive criminals think.
Why “naive”? Most ordinary users have no idea about the “traces” left after almost any interaction with USB devices. For example, there was a case involving child pornography distribution. The criminals used only external drives (regular flash drives); nothing was stored on the disks.
They overlooked two things. First: information about USB device connections is stored in the Windows registry and, unless deleted, stays there for a very long time. Second: if you use Windows Explorer to view images, it automatically creates (and saves!) thumbnail previews, usually under %LocalAppData%\Microsoft\Windows\Explorer\. By analyzing these thumbnails and matching USB device IDs with confiscated drives, investigators proved the defendants’ involvement.
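Those registry traces are easy to see for yourself: on Windows, the key HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR holds one subkey per storage device model ever plugged in, with vendor, product, and revision encoded right in the key name. A small sketch that decodes such a name (the sample value and the function name are made up for illustration; the `Ven_`/`Prod_`/`Rev_` layout is how Windows actually forms these keys):

```python
def parse_usbstor_key(key_name):
    """Split a USBSTOR subkey name like
    'Disk&Ven_Kingston&Prod_DataTraveler_2.0&Rev_PMAP'
    into its vendor / product / revision fields."""
    fields = {}
    for part in key_name.split("&"):
        if part.startswith("Ven_"):
            fields["vendor"] = part[4:]
        elif part.startswith("Prod_"):
            fields["product"] = part[5:]
        elif part.startswith("Rev_"):
            fields["revision"] = part[4:]
    return fields
```

Matching these decoded entries (and the per-device serial numbers stored one level deeper) against seized flash drives is exactly the correlation the investigators in that case performed.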
What about encryption? That’s not foolproof either. First, there are specialized apps that can create a memory dump and extract cryptographic keys used to access encrypted volumes (like the popular BitLocker To Go among naive criminals). One such program is Elcomsoft Forensic Disk Decryptor, which can analyze a dump automatically. You can create a memory image using the free Belkasoft RAM Capturer.
Belkasoft RAM Capturer
Second, it’s no secret that many crypto-containers automatically escrow their encryption keys in the cloud. If you enable FileVault 2 on a Mac, Apple will notify you several times that recovery is possible via iCloud. But when a volume is encrypted with BitLocker Device Protection, Microsoft quietly escrows a recovery key in your Microsoft Account. These keys are available directly on your account page.
How can someone access your account? If your computer uses Microsoft Account login (not a local Windows account), an offline brute-force attack can recover the password, which—surprise!—will match your online Microsoft Account password.
In Conclusion: Nested Crypto-Containers with Hardware Keys
It seems like unbreakable protection. A tech-savvy user might chuckle, confident their data is now completely safe.
Theoretically—yes. In practice… there are legal nuances. Here’s a vivid example:
A man accused of downloading and storing child pornography has been in an American prison for over two years. The official charge: refusing to provide passwords to encrypted external storage (NAS), where the court believes the illegal material is stored.
Whether it’s actually there is unknown; no such content was found. But the charge is serious, and in such cases, things like the presumption of innocence and the right not to self-incriminate can be ignored. So the accused sits in jail and will remain there until he reveals the passwords or dies of old age or other causes.
Not long ago, human rights activists filed an appeal, noting that the maximum sentence for refusing to cooperate with investigators is 18 months. The appeal was rejected by the court, even though the judge called the lawyer’s arguments “interesting and multifaceted.” With a serious enough charge, judges will overlook anything, including written law.