I don't know about this freedom of speech thing. Especially when it comes to companies.
Employees can often seem like mouthpieces for the corporate cause, carefully sidestepping inconveniences such as truth.
I was forced to slump on my chaise-longue, however, on hearing the views of Microsoft's Kate Crawford.
"Who's a boy and who's a girl?" she offered in somewhat mechanical tones.
And then: "It feels so good to look the same."
This was followed by the somewhat portentous words: "Machines can do the work, so that people have time to think."
"Machines can do the work to make reality of imagination," was another far-reaching phrase that quickly emerged.
This was all, I should add at this point, when Crawford was a member of the electronica band B(if)Tek.
A look at just one of their videos, which I missed at the turn of the century, reveals the presence of a seemingly advanced HAL figure. A look at another offered the hopeful words about machines and the coda: "Programming for pleasure."
This is entirely relevant, as I came to Crawford's pleasingly forthright views via an interview she gave to The Guardian.
No, this isn't quite a music column, though I could sing it to you for a fee.
Crawford, you see, is a senior principal researcher at Microsoft and the interview revealed some of her conclusions about whether machines really are doing the work so that we have time to think. And about the sort of work machines are actually doing.
Crawford talked about the vast human and environmental costs of giving ourselves over to an artificially intelligent world.
I found myself spontaneously cheering at this: "AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous."
It's something easily forgotten, as we get so much entertainment from asking Siri what the weather is so that we don't have to look outside ourselves.
Here's the part, though, that incited a certain existential giddiness in the remaining slivers of my soul. Crawford explained something that human beings have known for so long, yet allowed machines to use as the basis for vital, painful, and utterly threatening technology.
"The idea that you can see from somebody's face what they are feeling is deeply flawed," she said. "I don't think that's possible."
We know this every time we comment on someone else's holiday snaps.
"You look so happy in that one."
"You're kidding. I'd already decided to dump him."
Yet here we are, selling AI that promises "perceived emotion recognition that detects a range of facial expressions like happiness, contempt, neutrality, and fear; and recognition and grouping of similar faces in images."
Yes, that comes straight from Microsoft's own description of its Face API. Which it describes as delivering "low-friction, state-of-the-art facial recognition."
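To see what that marketing copy amounts to in practice, here's a purely illustrative sketch. The JSON below is mocked up in the shape the Face API's v1.0 detect call returned when asked for emotion attributes (a confidence score per emotion label); it is not real output, and the helper function is my own invention, not Microsoft's:

```python
# Mocked-up response shaped like a Face API v1.0 "detect" result with
# returnFaceAttributes=emotion -- illustrative only, not actual API output.
mock_response = [
    {
        "faceId": "00000000-0000-0000-0000-000000000000",
        "faceAttributes": {
            "emotion": {
                "anger": 0.01,
                "contempt": 0.02,
                "disgust": 0.00,
                "fear": 0.00,
                "happiness": 0.84,
                "neutral": 0.10,
                "sadness": 0.01,
                "surprise": 0.02,
            }
        },
    }
]

def dominant_emotion(face: dict) -> str:
    """Return the emotion label with the highest confidence score --
    the leap from pixels to feelings that Crawford questions."""
    scores = face["faceAttributes"]["emotion"]
    return max(scores, key=scores.get)

for face in mock_response:
    print(dominant_emotion(face))
```

The machine looks at those numbers and declares you "happy." Whether you were about to dump him, it cannot say.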
Crawford says she doesn't have to get Microsoft's approval before publishing her work -- the latest being a book called Atlas of AI.
Which might make some wonder whether the company listens to her enough. Why, indeed, is Microsoft selling AI which its senior principal researcher says is dangerously flawed?
Crawford says this sort of emotion-identification software is "one of the most urgently needed domains for regulation." She says it's based on thinking from the 70s -- why would you do that? Don't you know what happened then? -- that there are only six basic emotions our faces betray.
Look at me. I said, look at me. What am I thinking and feeling? I bet you don't get it right.
As is often the case, I took the naive approach and asked Microsoft (twice) why it continues to sell such software. I will update, should I ever hear.
Naturally, there's always the fear that the real answer is: "Because other people are doing it and we don't want to be left behind. Oh, and there's a lot of money in it."
Of course, I found myself riveted that Crawford was allowed to express her views quite so openly. Why, a simple Google search reveals the tendency of other tech companies to worry so much about their AI researchers' actual thoughts that they fire them. (I suggest the search term: "Google fires AI researcher.")
I want to believe, though, that those who are thinking about the true consequences of what the tech world is creating are being heeded.
After all, it's the machines doing the work that give the researchers time to think.
It's just that the machines may be doing very dubious work indeed.