
When I was told I’d be interviewed by an AI, I assumed the experience would feel detached. How could I relate to someone without a face? I had always relied so heavily on verbal and non-verbal cues during interviews that I needed to see my interviewer’s facial expressions and movements to gauge how she was taking in my responses.
So, when the interview started, all I could see on my screen was myself, with a timer indicating the required duration. My AI interviewer was present only as a voice, which I initially found disorienting: my mind still couldn’t grasp that I was not talking to a human, because she sounded like one.
It felt weird but at the same time revolutionary. I felt like one of the lead characters in a futuristic tech movie, except that future is already here, just unevenly distributed.
And because I am a non-native English speaker, talking with the AI made me uncomfortable speaking the language; I felt inferior, even more so than I would when talking to a native speaker face to face. What’s interesting is that this sense of inferiority stemmed from my idea that an AI is all-knowing and all-seeing, with real-time access to more data than any person could ever hoard, and therefore far superior to a fallible human.
I had difficulty phrasing my thoughts, and my brain stopped braining for a while, until I began to hear human-like responses that resembled validation, as if she, an AI, resonated with me, a human.
After that, I started to feel more comfortable, as if I were talking to an actual human being on the other side of the screen.
I entered the interview believing that AI is simply a tool, and I left it with a kind of warmth, feeling that AI could feel human too.