Zoom embeds deepfake detection, 1 in 4 Americans report deepfake voice calls, an inner-speech BCI decodes a 125,000-word vocabulary, and a biosignal microphone bridges the security and generative pillars.

Verification

Zoom embeds real-time deepfake detection via Pindrop integration

Zoom announced on 12 March that Pindrop Pulse, Passport, and Protect are now natively integrated into Zoom Contact Center. The stack analyses call audio, device intelligence, voice patterns, and network signals to detect synthetic voice fraud in real time. Aimed at financial services, healthcare, insurance, and government, this moves deepfake detection from a bolt-on to an embedded layer of the collaboration platform itself.

Source: GlobeNewswire

Verification

Pindrop launches agentic AI for fraud case investigation

On 17 March, Pindrop unveiled "Fraud Assist," an agentic AI tool that generates real-time call summaries, risk insights, and case documentation for fraud analysts. Early results from First National Bank of Omaha showed a 50% improvement in case disposition accuracy; beta customers reported up to 70% gains in analyst efficiency. Context: Pindrop's own data shows AI-driven fraud surged 1,210% in 2025.

Source: GlobeNewswire

Verification

1 in 4 Americans report receiving a deepfake voice call

A March 2026 "State of the Call" report found that 25% of Americans say they have received a deepfake voice call in the past 12 months, with another 24% unsure if they could tell the difference. Voice cloning has crossed what researchers call the "indistinguishable threshold" — a few seconds of audio now suffice to generate a convincing clone with natural intonation, pauses, and breathing. Deepfake fraud losses are forecast to reach $40 billion by 2027.

Source: BusinessWire

Verification

Ofcom global titles ban takes effect 22 April

Ofcom's ban on leasing "global titles" — signalling identifiers that criminals exploit to intercept calls, divert SMS-based 2FA codes, and spoof caller identity — reaches its final compliance deadline on 22 April 2026. This is the first regulatory action of its kind worldwide and closes a long-standing infrastructure-level vulnerability in mobile voice authentication.

Source: Ofcom

Verification

Reddit weighs biometric proof of personhood for bot detection

Reddit introduced "[App]" labels for automated accounts on 31 March and is evaluating passkeys, biometric proof-of-personhood services (including World ID), and government ID as privacy-preserving verification for flagged accounts. The platform removes roughly 100,000 bot accounts daily. CEO Steve Huffman emphasised the goal is to know if a user is human, not who they are.

Source: Biometric Update

Diagnostics

Canary Speech showcases vocal biomarker platform at HIMSS26 with Microsoft

At HIMSS26 (9-12 March), Canary Speech demonstrated its clinically validated voice analytics platform inside the Microsoft booth, showing how it extends Dragon Copilot for ambient clinical listening. The company's tools can screen for cognitive decline, depression, Parkinson's, and other conditions from a 40-second voice sample. In the first half of 2026, the platform is expanding to ADHD and autism screening in paediatric settings.

Source: Canary Speech

Diagnostics

Frontiers publishes review calling for master protocols in vocal biomarker research

A narrative review in Frontiers in Digital Health assessed 21 studies and found no established master protocol for vocal biomarker development, identifying this as the primary barrier to clinical translation. The authors propose standardised guidelines for data acquisition, preprocessing, feature extraction, and validation. Separately, the international VOCAL initiative has published consensus-based definitions spanning cardio-respiratory acoustic, voice, speech/articulatory, and cognitive/language biomarker categories.

Source: Frontiers in Digital Health

Diagnostics

PMC review maps ethical, legal, and social implications of voice health data

A scoping review published in PMC examined the ethical, legal, and social implications of using voice and speech data in healthcare. As vocal biomarker tools move toward clinical deployment, the paper highlights unresolved questions around consent, data ownership, algorithmic bias, and the dual-use risk of health-sensitive voice data.

Source: PMC

Verification · Diagnostics

IEEE Spectrum profiles OriginStory's biosignal-sensing microphone

IEEE Spectrum covered OriginStory, an ASU spinout and FTC AI Voice Cloning Challenge winner, whose custom microphone simultaneously records acoustic speech and physiological biosignals — heartbeat, lung movement, vocal-cord vibration, lip and jaw motion. By validating that acoustic and biosignal streams share the same origin, the device provides hardware-level proof of human speech. The approach is described as more robust than software watermarking alone because the biosignals are inherently difficult to synthesise.
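The origin-check idea — that an acoustic stream and a physiological stream produced by the same speaker must co-vary — can be sketched with a toy correlation test. Everything below (signal shapes, the envelope features, the 0.8 threshold) is an illustrative assumption, not OriginStory's actual implementation:

```python
# Toy origin check: if audio and a biosignal (e.g. vocal-cord vibration)
# come from the same speaker, their short-time energy envelopes should be
# strongly correlated. Injected synthetic audio has no matching biosignal.
import math
import random

def energy_envelope(signal, frame=50):
    """Mean absolute amplitude per non-overlapping frame."""
    return [sum(abs(s) for s in signal[i:i + frame]) / frame
            for i in range(0, len(signal) - frame + 1, frame)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def same_origin(audio, biosignal, threshold=0.8):
    """Accept the pair as human speech only if the envelopes co-vary."""
    return pearson(energy_envelope(audio), energy_envelope(biosignal)) >= threshold

# Simulated speech: amplitude bursts shared by the audio and biosignal
# channels; the "fake" channel carries anti-phase bursts instead.
random.seed(0)
env = [1.0 if (t // 500) % 2 == 0 else 0.1 for t in range(4000)]
env_fake = [1.0 if (t // 500) % 2 == 1 else 0.1 for t in range(4000)]
audio = [env[t] * math.sin(0.3 * t) for t in range(4000)]
bio = [env[t] * (0.5 + 0.1 * random.random()) for t in range(4000)]
fake = [env_fake[t] * math.sin(0.3 * t) for t in range(4000)]

print(same_origin(audio, bio))   # matched channels
print(same_origin(audio, fake))  # spoof: envelopes disagree
```

The real device works at the hardware level with several biosignal channels; the sketch only shows why a spoofed acoustic stream that lacks a consistent physiological counterpart is hard to pass off.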

Source: IEEE Spectrum

Generative

Cell publishes inner speech BCI decoding with 125,000-word vocabulary

A study in Cell (Stanford/multi-site) demonstrated that inner speech — silent, imagined sentences — can be decoded from motor cortex recordings in real time with up to 74% accuracy across a 125,000-word vocabulary. Four participants with ALS or stroke-related speech loss were tested. The team also demonstrated a password-gated privacy mechanism (>98% keyword accuracy) that prevents the BCI from decoding inner speech unless explicitly unlocked by the user.

Source: Cell

Generative

UC Berkeley/UCSF streaming brain-to-voice neuroprosthesis achieves near-real-time latency

Researchers at UC Berkeley and UCSF published a streaming method that synthesises brain signals into audible speech with first-sound latency under 1 second — down from ~8 seconds per sentence in prior systems. The system converts motor cortex signals directly into voice output with tone, pacing, and melody, creating a near-conversational experience for people with severe paralysis.

Source: Berkeley Engineering

Diagnostics · Generative

Systematic review validates sonified biofeedback for gait and balance rehabilitation

A review in the Journal of NeuroEngineering and Rehabilitation examined 49 studies on sonified biofeedback — real-time translation of biomechanical signals into non-verbal sound — for balance and gait training. Of the 49 studies, 47 reported at least one statistically significant positive outcome. The review highlights neurological auditory-motor coupling as a mechanism, suggesting sonification may leverage neuroplasticity pathways distinct from visual feedback.
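The core of sonified biofeedback is a real-time mapping from a sensor value to a sound parameter. A toy sketch, with an assumed linear value-to-pitch mapping and a simulated gait signal (the signal, ranges, and frequencies are illustrative, not from any reviewed system):

```python
# Toy sonification: map a biomechanical signal (here a simulated ankle
# angle over one gait cycle) frame by frame to an audio frequency, so
# deviations from the expected movement pattern become audible.
import math

def sonify(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linearly map a sensor value in [lo, hi] to a pitch in Hz, clipped."""
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return f_min + t * (f_max - f_min)

# Simulated ankle angle across one gait cycle (degrees, +/-20 range).
angles = [20 * math.sin(2 * math.pi * i / 100) for i in range(100)]
pitches = [sonify(a, -20, 20) for a in angles]

print(round(min(pitches), 1), round(max(pitches), 1))
```

Real systems vary the mapping target (pitch, loudness, timbre) and the input feature (joint angle, ground reaction force, trunk sway), but the auditory-motor coupling argument applies to any such continuous mapping.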

Source: Journal of NeuroEngineering and Rehabilitation