Yes, the use of AI-generated evidence can be, and almost certainly would be, vigorously contested in a court-martial. AI-generated material, such as a deepfake video, an AI-reconstructed accident scene, or an AI-generated transcript, is a novel form of evidence subject to intense scrutiny under the Military Rules of Evidence. The primary avenues for challenge are authentication under MRE 901 and the standard for expert testimony under MRE 702, which military courts apply consistent with the federal Daubert framework for scientific reliability.
A military defense attorney would typically file a motion in limine to exclude the AI-generated evidence before trial. The attorney would first challenge authentication under MRE 901, arguing that the prosecution cannot show the item is what it purports to be: because the output was produced by a machine rather than perceived by a human witness, no one can testify from personal knowledge that it accurately depicts real events. More importantly, the attorney would challenge its scientific reliability under MRE 702, demanding that the government explain the AI's algorithm, its known error rate, the data it was trained on, and whether its methodology has been peer-reviewed and is generally accepted in the relevant scientific community.
The military judge would act as a crucial gatekeeper, holding a pretrial hearing at which both sides could present expert testimony on the reliability of the specific AI tool used. Given well-documented problems with AI "hallucinations," algorithmic bias, and the ease of manipulation, a judge would likely proceed with great caution. It is therefore quite possible that many novel forms of AI-generated evidence would be found not to meet the rigorous reliability standards required for admission in a criminal trial, leading to their exclusion.