Senator Hassan escalates congressional pressure on voice cloning companies, Science Corp implants its biohybrid brain sensor in a first human patient, and Modulate's new Velma API makes continuous deepfake voice detection economically viable at scale.
Senator Hassan presses voice cloning companies on scam prevention as AI Fraud Accountability Act advances
Senator Maggie Hassan (D-NH), ranking member of the Joint Economic Committee, sent letters on April 16 to ElevenLabs, LOVO, Speechify, and VEED demanding detailed answers about how each company monitors for scam-related uses, verifies consent before cloning a voice, detects attempts to imitate public figures and minors, watermarks AI-generated audio, and reports bad actors to law enforcement. Hassan also asked for metrics on how many users have been flagged for policy violations, including what share was caught before generating a nonconsensual voice.
The letters arrive as Senate bill S.3982, the AI Fraud Accountability Act of 2026, introduced by Senators Tim Sheehy (R) and Lisa Blunt Rochester (D), moves through the Commerce Committee. The bill would amend the Communications Act of 1934 to create a federal criminal prohibition on using a "digital impersonation" in interstate or foreign communications with intent to defraud. This matters because the voice cloning industry has operated without a federal enforcement framework; consent verification and provenance tracking may soon become legal requirements, not optional safeguards.
Source: Axios
Science Corp places biohybrid brain sensor in first human patient
Science Corporation, the startup from former Neuralink president Max Hodak, announced this week that its biohybrid cortical sensor has been placed in a human brain for the first time. The device merges synthetic electronics with living biological tissue: a thin-film electrode array seeded with living neurons is placed on the cortical surface, where those neurons form synaptic connections with the patient's own brain tissue, creating a biological bridge between the device and underlying neural circuits. The pea-sized chip carries 520 recording electrodes.
The device successfully recorded neural signals through its living neuronal layer, validating years of preclinical work. Dr. Murat Günel, chair of neurosurgery at Yale School of Medicine, joined as scientific adviser to lead future US trials. This matters because biohybrid interfaces could eventually achieve higher-fidelity, longer-lasting neural recordings than purely electronic implants — a critical requirement for real-time speech synthesis from brain activity.
Source: TechCrunch
Modulate's Velma Deepfake Detect makes continuous voice fraud screening economically viable
Modulate launched Velma Deepfake Detect, a synthetic voice detection API that ranks first on the Hugging Face Deepfake Speech leaderboard while running at roughly 1/578th the cost of the next-best model. Priced at $0.25 per hour of audio, Velma is over 100× cheaper than competing solutions, making it feasible for the first time to monitor entire voice interactions rather than sampling short segments. Built on Modulate's Ensemble Listening Model (ELM) architecture, the system combines short-window vocal tone analysis with longer-range rhythm and pronunciation patterns, and supports both real-time streaming and batch processing.
This matters because deepfake detection has historically been too expensive for continuous deployment, forcing organisations into sampling strategies that leave coverage gaps. At this price point, contact centres, identity verification pipelines, and voice agent platforms can run detection across every second of every call — a shift from spot-checking to full-coverage fraud prevention.
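To make the economics concrete, here is a back-of-envelope cost sketch. The $0.25/hour rate comes from the announcement; the contact-centre call volume, average handle time, competitor rate (taken as ~100× Velma's, per the "over 100× cheaper" claim), and 10% sampling ratio are all illustrative assumptions, not figures from any vendor.

```python
# Back-of-envelope comparison: full-coverage detection at Velma's announced
# price vs. spot-check sampling at an assumed legacy price point.

VELMA_RATE = 0.25        # USD per hour of audio (announced price)
COMPETITOR_RATE = 25.00  # USD per hour, assumed from "over 100x cheaper"

calls_per_month = 500_000   # hypothetical contact-centre volume
avg_handle_minutes = 6      # hypothetical average call length
sampling_ratio = 0.10       # hypothetical share of calls spot-checked

audio_hours = calls_per_month * avg_handle_minutes / 60

full_coverage_velma = audio_hours * VELMA_RATE
sampled_legacy = audio_hours * sampling_ratio * COMPETITOR_RATE

print(f"Audio volume:          {audio_hours:,.0f} hours/month")
print(f"Full coverage (Velma): ${full_coverage_velma:,.0f}/month")
print(f"10% sampling (legacy): ${sampled_legacy:,.0f}/month")
```

Under these assumptions, screening every second of every call at the new price still costs an order of magnitude less than spot-checking one call in ten did at the old one, which is the economic shift the announcement is claiming.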
Source: Morningstar / AccessWire
Frontiers editorial codifies five tenets for responsible vocal biomarker deployment
An editorial published April 10 in Frontiers in Digital Health, closing a multi-author research topic on vocal biomarkers and voice AI in healthcare, argues that voice has crossed from speculation to "structured clinical inquiry" and proposes five operational tenets for the field: (1) adopt master protocols aligned with verification, analytical, and clinical validation stages; (2) treat usability metrics as primary outcomes; (3) require public model and data cards describing linguistic and demographic coverage; (4) integrate DEIA training into every technical curriculum; and (5) pursue early regulatory alignment rather than retrospective compliance.
The editorial synthesises contributions from clinicians, data scientists, ethicists, and engineers, and arrives at a significant framing: voice can now function as a regulated physiological signal, but only if collection and interpretation are governed by standardisation and trust. For teams building vocal biomarker products, these tenets sketch the likely shape of future regulatory expectations.
Source: Frontiers in Digital Health
AWS to sunset Amazon Connect Voice ID on May 20, pointing customers to Pindrop
Amazon Web Services confirmed that Amazon Connect Voice ID will reach end of support on May 20, 2026. After that date, customers will lose access to Voice ID features on the Connect console, admin website, and Contact Control Panel. AWS is directing customers toward third-party alternatives available on the AWS Marketplace, specifically naming Pindrop as a migration path. New customer sign-ups were already blocked as of May 2025.
The exit of a major cloud platform from first-party voice biometrics is a market signal worth reading. It suggests that general-purpose cloud providers see voice authentication as a specialty discipline — one better served by dedicated vendors with deeper anti-spoofing stacks — rather than a commodity feature. For enterprises currently using Voice ID, the migration window is now one month.
Source: AWS Documentation
Two developments this month mark an inflection point in the economics of voice fraud defence. Modulate's Velma API drops detection cost to $0.25/hour, and AWS's exit from first-party Voice ID pushes enterprises toward specialist vendors with integrated deepfake countermeasures. Meanwhile, legislators are moving to require consent verification and provenance tracking at the point of voice synthesis. The emerging picture is a market converging on continuous, full-pipeline voice integrity monitoring — detection embedded at every stage from call ingress to authentication — rather than the periodic sampling that defined the previous generation of anti-fraud tools. Enterprises planning voice security architectures should design for always-on detection as the baseline, not the premium tier.