Prior to the last 10 years or so, when I've been mostly a full-time advisor, pundit, and columnist, I was a manager. I was a product marketing executive earlier in my career and a publisher later on. I had people directly reporting to me for years.
I managed editors, salespeople, programmers, manufacturing teams, and other executives.
You want to know one of the best things about my encore career? No direct reports. I don't have to manage anyone.
People who haven't been managers think bosses get to spend their time dumping work on underlings and just bossing people around. Managers know that the reality is they spend much of their time simply trying to get the people who work for them to execute their job duties as instructed.
Some of that falls on the manager, who may or may not give clear instructions. But an equal amount of that challenge falls on the direct reports who misinterpret instructions, passive-aggressively follow directions to the letter (this was my karma payback, because I did this to my bosses), or simply need to be negotiated with to do what needs doing.
It's part of why I like programming so much. With programming, the computer does exactly what you tell it to do. Of course, the precision with which a program follows instructions often leads to bugs, especially on the first try. But that's OK, because whatever it does wrong is right there in the code.
It might be a challenge to come up with the right algorithm or to translate the algorithm and data structures in your head into working code, but code is code. It's consistent and reasonably predictable.
Then there's artificial intelligence (AI). Giving instructions to a generative AI like ChatGPT is much more like managing a programmer than it is like programming. Everything is subject to interpretation and negotiation. Yes, you can get results, and sometimes you can get results you couldn't have gotten without a lot of coding, but there's still some degree of haggling, negotiating, reframing requests, and trying again to get it right.
You can give an AI the same prompt twice and it will return two different results. Unless your code contains some sort of randomization function or a serious bug, you can run it twice and it will return exactly the same results.
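To make that contrast concrete, here's a minimal sketch in Python (my choice for illustration; the column doesn't specify a language). The plain function returns the identical result on every call, while the function that samples randomly, standing in for a generative AI's nondeterminism, can come back different from run to run.

```python
import random

def total(values):
    """Deterministic: the same input produces the same output, every time."""
    return sum(values)

def flaky_answer(options):
    """Stand-in for a generative AI: picks among several plausible outputs."""
    return random.choice(options)

data = [1, 2, 3, 4]
print(total(data))  # 10
print(total(data))  # 10 -- identical to the first call

# The same "prompt" twice may produce two different answers.
print(flaky_answer(["benumb", "devolve", "degrade"]))
print(flaky_answer(["benumb", "devolve", "degrade"]))
```

The point isn't that randomness is bad; it's that with code, any variation is something you put there yourself and can find again.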
Will AI take programming jobs?
I've been giving this question a lot of thought, especially in light of some prompt writing I did this weekend while working on an article on advanced prompt writing. In that article, I tried to get ChatGPT to solve a very simple problem, and it wound up taking me hours and more than 20 prompt attempts to get it to work reliably. The prompt was:
Word similar to devolve that begins with a B
ChatGPT kept giving me answers that began with a "D", seeming fully confident in its answers. When I pointed out that the words it returned did not begin with a "B", it apologized and made the same mistake. Over and over and over again. It felt very much like I was talking to a particularly stubborn employee, trying to get them to see what I wanted them to do.
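Ironically, the fix for that kind of stubbornness is often a few lines of plain, deterministic code wrapped around the AI's answers. The sketch below is my own illustration, not the prompt technique from the article: check each candidate against the hard constraint (begins with "B") and discard the confident wrong answers.

```python
def meets_constraint(word: str, letter: str = "b") -> bool:
    """True only if the candidate actually begins with the required letter."""
    return word.strip().lower().startswith(letter.lower())

def first_valid(candidates, letter="b"):
    """Return the first suggestion that passes the check, or None.
    In real use, `candidates` would come from repeated calls to the model."""
    for word in candidates:
        if meets_constraint(word, letter):
            return word
    return None

# ChatGPT's confident wrong answers fail the check; a correct one passes.
print(meets_constraint("devolve"))                      # False -- starts with "d"
print(first_valid(["degrade", "decay", "backslide"]))   # "backslide"
```

Unlike the model, this check will never apologize and then make the same mistake again.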
There was a time when I managed a few salespeople who sold over the phone. They were asked to call a fairly warm prospect list and pitch our services. I gave them an exact description of how to pitch our services, but we had one salesperson who just refused to stick to the script.
As such, some of the people she called were turned into hot leads…until we met with the prospects, only to find out that they had the wrong idea about the services we offered. She liked her description better because it made getting appointments easier.
But it wasn't about making appointments. It was about making sales. She wasn't even compensated on making appointments, but that didn't matter. She liked her way better.
ChatGPT is like that. By the time I spent a few hours trying to get it to return a word beginning with a B, I reached the stage where I wanted to yell at it, "Well, what would it take to convince you that the word DEVOLVE begins with a D?"
I wasn't coding. I was negotiating. I spent a good part of my Sunday haggling with a robot, all the while thinking, "So this is progress?"
I've always been fascinated by AI, and we're at the point where the technology is close to what I dreamed it would become. I've worked with AI and the implications of AI as far back as my thesis work in college. And yet, after a few hours, I felt like banging my head against a wall. I wanted to scream at the top of my lungs and tear my hair out.
So, it was a lot like managing some of the direct reports I've had over the years -- and, if I'm honest, a lot like how my bosses felt managing me when I was younger.
I did eventually come up with a reliable prompt, and the article describes why it works. But it became very clear to me that while it looks like AIs might take low-level programming jobs, the fact that the AIs work so much like employees might provide some human-worker protection.
The following table shows that some tasks are easier with traditional coding, and others are easier with an AI. As you can see, the combination of the two is particularly interesting, but using an AI certainly doesn't remove the requirement for human skill and expertise.
| Task | Traditional coding | Generative AI |
|---|---|---|
| Finding data | You'll need to find a large dataset and use a specific API to retrieve individual data items. | Just describe what you need and the AI will find it somewhere. It's easy to do. |
| Accuracy of data | If the dataset is accurate and your code runs correctly, the data will be accurate. | There is no provenance to the data you retrieve. It could even be completely made up by the AI. |
| Ease of use | You need to be familiar with how to code and how to design an algorithm, as well as various APIs and language interactions. | If you can describe it, you can generally make it happen by simply telling the AI what you want. |
| Accuracy of results | Your code will do exactly what you tell it, including making errors if you haven't fully debugged it. | The AI will roughly interpret what you asked for and will sometimes stubbornly do whatever it wants anyway. |
| Executing complex instructions and getting reliable results | You need to be an experienced coder with a full grasp of how to construct algorithms and write code. | You need to be an experienced "prompt engineer" with a full grasp of how to specify problems and how they should be solved. |
| Are skills and training required? | Newbie programmers can do some projects, but real work requires a deep understanding of how to get the job done. | Anyone can write simple prompts, but solving complex problems requires a deep understanding of how to get the job done. |
For programmers, this changes the game in ways beyond crafting elegant code. There are ethical considerations programmers aren't necessarily trained to handle -- or possibly even understand. As AI becomes more prevalent, accountability, transparency, and bias in AI implementations become critical considerations.
The integration of AI into the programming world not only alters the technical aspects of the job but also introduces a new set of ethical dilemmas that programmers, now transitioning into AI managers, will need to learn to navigate.
I have no doubt that AI will transform programming jobs, and take some of the work away from real people. But, at least for the current generation of AI engines, getting anything real done will require some level of expertise, whether that be coding expertise, prompt-writing expertise, or -- more likely -- a mix of both. Plus a healthy dose of patience.