SD-301b · Module 2

Reading the Room in Real Time

3 min read

Signals. The room is full of them. The attendee who leans forward when you mention cost reduction — that is interest. The one who checks their phone when you show the architecture slide — that is disengagement. The executive who makes eye contact with another executive after you state your hypothesis — that is alignment being tested. Reading these signals in real time determines whether you adjust or plow through. The rep who adjusts closes. The rep who follows the script presents.

Virtual meetings strip out 60% of the signal. No body language below the shoulders. No side conversations. No spatial dynamics. What remains: facial expressions, voice tone, chat activity, camera status, and question patterns. A stakeholder who turns off their camera mid-meeting has mentally left. A stakeholder who unmutes to ask a detailed question has leaned in. AI meeting assistants can track engagement proxies — speaking time distribution, question frequency, response latency — and surface them in real time. The data is partial. Partial data still beats no data.
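Those three proxies reduce to simple bookkeeping over a diarized transcript. A minimal sketch, assuming a hypothetical Utterance record (speaker, start and end timestamps, text) and a crude trailing-question-mark heuristic for question detection; commercial meeting assistants derive these signals from richer audio, video, and chat streams:

```python
# Sketch: engagement proxies from a diarized transcript.
# The Utterance shape and the "?" heuristic are illustrative assumptions,
# not any specific vendor's data model.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Utterance:
    speaker: str
    start: float  # seconds from meeting start
    end: float
    text: str

def engagement_proxies(transcript: list[Utterance]) -> dict:
    speaking_time = defaultdict(float)  # total seconds per speaker
    questions = defaultdict(int)        # utterances ending in "?" (crude proxy)
    latencies = defaultdict(list)       # silence before each speaker's turns

    # Response latency: gap between one speaker finishing and the next
    # speaker starting. Growing gaps suggest the room is going quiet.
    for prev, curr in zip(transcript, transcript[1:]):
        if curr.speaker != prev.speaker:
            latencies[curr.speaker].append(max(0.0, curr.start - prev.end))

    for u in transcript:
        speaking_time[u.speaker] += u.end - u.start
        if u.text.rstrip().endswith("?"):
            questions[u.speaker] += 1

    total = sum(speaking_time.values()) or 1.0
    return {
        speaker: {
            "share_of_talk": speaking_time[speaker] / total,
            "questions_asked": questions[speaker],
            "avg_response_latency_s": (
                sum(latencies[speaker]) / len(latencies[speaker])
                if latencies[speaker] else None
            ),
        }
        for speaker in speaking_time
    }

if __name__ == "__main__":
    demo = [
        Utterance("rep", 0.0, 45.0, "Here is our cost-reduction hypothesis."),
        Utterance("exec_a", 47.5, 60.0, "How does this handle our EU data?"),
        Utterance("rep", 60.5, 90.0, "Good question. Two ways..."),
    ]
    for speaker, stats in engagement_proxies(demo).items():
        print(speaker, stats)
```

These numbers are per-meeting relatives, not absolutes: a 70% share of talk for the rep means something different in a demo than in a discovery call. The useful read is the shift when a topic changes, which is the virtual analogue of the in-room cues above.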