
Is AI in software engineering reaching an 'Oppenheimer moment'? Here's what you need to know

Explore the AI revolution within software engineering, including its coding advancements, sweeping impact on diverse sectors -- and ethical implications.
Written by David Gewirtz, Senior Contributing Editor
[Image: Abstract coding on screen with keyboard in front of it. Olemedia/Getty Images]

This morning, when I put gas in my car, I grumbled because prices had gone up again -- to $4.55 per gallon.

But then, I started to ask questions about what it took to get each six-pound gallon of that volatile liquid to me, particularly how software fueled the process. Software analyzed geological data and managed the drilling machines to make sure they didn't overheat. Software helped track weather and guided the giant ships transporting the fuel. It managed the refinery systems and provided safety monitoring. And it managed payment and pump operations to get the gas into my car.

Also: How to use ChatGPT to write code

Then, there's what it took to write the software that helped locate the fuel underground. It's certainly not one programmer sitting down with a Coke and pizza and writing code while listening to Rush. Industrial software, like that driving deep drilling machines, requires specialized software engineering.

And that software engineering was probably assisted by AI.

I find myself enormously excited yet deeply terrified in equal measure. AI may well be software engineering's "nuclear" moment, where we're bringing enormously powerful new capabilities into the world, but what's contained in those capabilities? The power to both create and destroy.

What is software engineering?

To really understand AI in software engineering, we have to first understand software.

Software controls computers, where a computer can range from the PC on your desk to a tiny processor embedded in a piece of factory equipment. To put it in simple terms, software is a series of instructions that tells the computer what to do.

Programming (colloquially called "coding") is the practice of stringing those instructions together to make something happen. I maintain a small program that lets people donate to nonprofits. It has 153,259 lines of code. Bigger programs, like Windows and Linux, require hundreds of millions of lines. Importantly, the order and structure of the instructions control the program's operation. Get it right, the program does what you want. Get it wrong, the results are unpredictable.
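
To make that concrete, here's a tiny Python sketch (the prices and rates are invented) showing how the same instructions, run in a different order, produce a different result:

```python
# Hypothetical checkout math: the same three values, combined in a
# different order, give different totals.

price = 100.00           # starting price in dollars
FLAT_DISCOUNT = 10.00    # a $10-off coupon
TAX_RATE = 0.08          # 8% sales tax

# Version 1: apply the coupon first, then tax the discounted price.
total_coupon_first = (price - FLAT_DISCOUNT) * (1 + TAX_RATE)

# Version 2: tax the full price first, then apply the coupon.
total_tax_first = price * (1 + TAX_RATE) - FLAT_DISCOUNT

print(round(total_coupon_first, 2))  # 97.2
print(round(total_tax_first, 2))     # 98.0 -- same steps, different order, different answer
```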

Also: How I used ChatGPT to write a custom JavaScript bookmarklet 

Now, let's talk about programming languages. A language is the vocabulary and syntax used to write programs. You may have heard of C, Java, and Swift, but there are hundreds. The thing is, different languages can be used to accomplish the same thing. Whether a recipe for lasagna is written down in Italian or English, you still need to put it in the oven for the right amount of time.

Also: I used ChatGPT to write the same routine in 12 top programming languages. Here's how it did

Languages are chosen for many reasons, from suitability to a task to compatibility with the target environment. Since it's impossible to know them all, many programmers prefer to work in the languages they already know. Don't get too caught up in the idea of languages; just know that they provide the words, and the structure of those words, used to create code.
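
As a small illustration of the lasagna point, here's the same "recipe" written two ways in Python. C, Java, or Swift would phrase it differently again, but the answer wouldn't change:

```python
# One recipe, two phrasings: sum the squares of the numbers 1 through 10.

# Spelled out step by step, the way most languages would express it:
total = 0
for n in range(1, 11):
    total += n * n

# The same recipe in a more compact Python idiom:
total_again = sum(n * n for n in range(1, 11))

assert total == total_again == 385
```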

Also: I used ChatGPT to write the same routine in these ten obscure programming languages

Programmers string together computer instructions to make machines do useful work. But think about the numbers I mentioned. My donation software alone contains more than a hundred thousand such instructions. That's a ton of complexity. It can quickly get very confusing, and it can break just as quickly.

Here's how to think about software engineering as a discipline:

  • Software engineering applies scientific methods to designing and testing software, much as civil engineers do when building a bridge.
  • It's about a structured approach, akin to architects planning before building.
  • Just as engineers use best practices for sturdy structures, software engineers ensure dependable and efficient software.
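
To make that discipline concrete, here's a minimal Python sketch (the function and its tests are illustrative, not from any real project). The function's contract is written down, and tests pin its behavior -- including the edge case a hurried coder might skip:

```python
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

def test_average():
    assert average([2, 4, 6]) == 4
    assert average([5]) == 5
    try:
        average([])  # the edge case: empty input must fail loudly
    except ValueError:
        pass
    else:
        raise AssertionError("empty input should raise ValueError")

test_average()
print("all tests passed")
```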

So, if you're going to let your car drive you down the highway at 70 mph, it would be nice to know that you're not placing your life into the hands of untested software written by some coder living off pizza and six-packs of Red Bull, but instead that some engineering discipline went into creating and testing the software doing the driving.

Also: I'm using ChatGPT to help me fix code faster, but at what cost?

There's overlap between coding and software engineering. I'm a trained software engineer who can also code. A developer -- which can be anything from a lone individual to a large company -- makes software products. Their skills include coding, software engineering, and business development.

Software engineering is the discipline behind coding that makes it manageable, maintainable, and ultimately, reliable.

What is the difference between AI and software?

Technically, software is anything that does its work in bits and bytes rather than in hardware. For example, an Excel budget spreadsheet isn't considered as software-y as the Excel program itself, unless that spreadsheet includes a ton of macros and is sold as an application in its own right.

Traditional software is all about a series of steps and decisions. If you're creating code by describing steps and decisions, you're programming traditionally.

But AI doesn't work that way. The whole idea of AI is that it makes intellectual leaps that haven't been pre-defined as steps and decisions. Instead, AI uses data -- usually lots and lots of data -- to help guide its output.
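
A tiny, hypothetical contrast in Python makes the distinction visible. The first spam check is traditional programming, with every decision spelled out; the second (sketched with scikit-learn, one common choice) infers its rule from made-up example data:

```python
# Traditional programming: every decision is an explicit, pre-defined rule.
def is_spam_by_rules(message):
    return "free money" in message.lower() or "act now" in message.lower()

# The AI approach: no rules are written down; a model infers them from
# labeled examples. (These examples are invented and far too few to be
# useful -- real systems learn from millions.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

examples = ["free money inside", "act now to win big",
            "lunch at noon?", "meeting notes attached"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(examples), labels)

# Nobody told the model "free money" means spam; it inferred that from data.
print(model.predict(vectorizer.transform(["free money -- act now"])))  # likely [1]
```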

When you ask ChatGPT to write an article on IT infrastructure, it succeeds not because the designers anticipated your question and wrote special article-writing code for that topic. Similarly, when Midjourney helps you make a picture of a flying elephant coming out of a laptop, it's not because some programmer wrote a flying-elephant routine on the chance that someone might want that image.

Also: I asked ChatGPT to write a WordPress plugin I needed. It did it in less than 5 minutes

Instead, the AI systems use large language models (LLMs) to build what they're asked for out of tremendous libraries of data -- pretty much the whole of the internet plus every other bit of information AI engineers could feed into the systems.

All that puts AI somewhat beyond traditional software. It also means that no human really knows exactly what causes ChatGPT to respond as it does. These are called "emergent abilities": features of large language models that weren't programmed into them from the start.

The interesting thing about AI is that we have started to move away from typical engineering-level disciplines. We can't always predict what the AIs will do, or how they will do it. Talking a generative AI into doing something, what's called "prompt engineering," is almost as much management and negotiation as it is traditional engineering practice.
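
In code, that negotiation happens in the prompt itself. Here's a minimal sketch using OpenAI's Python client (this assumes the openai package is installed, an API key sits in your environment, and the model name is one your account offers; API details do change over time). The system message carries the ground rules, and you refine the wording until the output behaves:

```python
# A minimal prompt-engineering sketch. Assumes `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute whatever model your account offers
    messages=[
        # The system message is where much of the "negotiation" lives:
        # tone, format, and guardrails are set in words, not in code.
        {"role": "system",
         "content": "You are a careful code reviewer. Answer in plain "
                    "English, in three bullet points or fewer."},
        {"role": "user",
         "content": "What could go wrong with parsing dates as MM/DD/YYYY "
                    "in an international app?"},
    ],
)
print(response.choices[0].message.content)
```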

Also: Will AI take programming jobs or turn programmers into AI managers?

As a former programming professor, I find that interacting with LLMs like ChatGPT feels more like engaging with students than with coding environments.

How can AI be applied in software engineering?

As AI enters the mainstream, we're seeing what might become a symbiotic relationship between software engineers and generative AI tools. AI, for example, can suggest code optimizations, identify bugs, and even predict the ripple effects of changes in a system.

Early this year, I did a lot of stunt coding with the AI, just to see what it could do. I also used it to help find a particularly confusing bug, to identify code that needed to change for compatibility with newer versions of platform software, to write short routines that helped my wife in her business, and to make a Chrome bookmarklet that helps me create "Also" links in my ZDNET articles. I wouldn't say that AI has become essential to my workflow, but it's sure been a nice help.

Also: Okay, so ChatGPT just debugged my code. For real.

Without a doubt, the generative AI tools that have been all the rage this year were built by software engineers. AI can be part of the software engineering process. It can also be the result of software engineering. 

Furthermore, AI can help developers do their jobs. If you think about aspects of software engineering -- design, coding, debugging, maintenance, management, testing, distribution, and migration (to name a few) -- AI can help in all of these areas. For instance, AI can automate repetitive coding tasks, making the process faster and more efficient. Additionally, it can predict potential bugs or vulnerabilities in the code, ensuring more robust and secure software.

There are, however, limits. I first used AI to write some simple routines. Then when I tried it with more complex routines, the AI started to chase its tail. But when I fed an AI a pile of code that had been breaking (where I couldn't find what was wrong), the AI pointed out the error. AI can also help set up testing regimes. It can help folks in tech support answer questions when new agents don't have the experience to know the answers on their own.
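
To give a flavor of the kind of bug this works well for, here's a classic Python trap (a hypothetical reconstruction, not the actual code I fed the AI). It reads correctly and quietly misbehaves -- exactly where a second set of machine eyes earns its keep:

```python
# A classic, easy-to-miss bug: a mutable default argument is created once
# and shared across every call.

def add_donor(name, donors=[]):   # BUG: the [] is built once, not per call
    donors.append(name)
    return donors

print(add_donor("Ada"))           # ['Ada']
print(add_donor("Grace"))         # ['Ada', 'Grace'] -- surprise!

# The usual fix (and the kind an AI assistant will suggest): default to
# None and create a fresh list inside the function.
def add_donor_fixed(name, donors=None):
    if donors is None:
        donors = []
    donors.append(name)
    return donors
```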

Also: How ChatGPT can rewrite and improve your existing code

In its current state, AI can't be relied on to get things right when helping with software engineering. Think of it as a particularly talented but equally flaky college student: one who sometimes delivers insights of deep brilliance, but who can't be counted on for consistently accurate answers.

Then there's what AI can do for the world. This is where we need communal cogitation on the concept. 

Keep in mind that the incredible advancements and applications of AI listed below are built upon the foundations of software engineering. Without the principles and practices of software engineering, these AI-driven tools and services wouldn't be possible.

Here are ten things AI does for us in the real world right now:

  1. Personal assistants like Siri, Alexa, and Google Assistant help us with tasks and answer questions.
  2. Healthcare algorithms assist in diagnosing diseases and predicting patient outcomes.
  3. Financial analysis tools help in stock trading and fraud detection.
  4. Content recommendation algorithms suggest movies, music, and articles based on our preferences.
  5. Customer service chatbots handle customer queries and complaints.
  6. Manufacturing robots assist in assembly lines and quality control.
  7. Language translation services are powered by AI.
  8. AI is used heavily in major studio games to make those games interactive, intelligent, and compelling.
  9. Security systems use facial recognition and anomaly detection in cybersecurity.
  10. Marketing algorithms personalize ads and customer outreach.

And since generative AI rose to prominence this year (and is changing so much more), here are ten things AI will likely be doing for us in the next decade:

  1. Advanced healthcare could include AI-assisted drug discovery and personalized medicine.
  2. Climate modeling to predict and mitigate the effects of climate change.
  3. Emotional intelligence in AI that can read and respond to human emotions.
  4. Advanced robotics that can perform complex tasks and assist in daily life.
  5. Virtual reality experiences powered by AI, offering fully immersive virtual worlds.
  6. Education systems providing personalized learning experiences tailored to individual needs.
  7. Space exploration assisted by AI in navigating and operating spacecraft.
  8. Legal assistance through AI in legal research and case preparation.
  9. Art and creativity enhanced by AI-generated art and music, or as a collaborative tool for artists.
  10. Public safety improvements through AI in disaster prediction and response.

And then there are self-driving cars. AI and machine learning are already improving communication systems in self-driving cars, and improving the performance of the cameras in our smartphones.

AI, like software, will be embedded in everything we use, hopefully adding value all along the way.

What are the pressing ethical questions in AI and software engineering?

In addition to AI assisting with the software engineering process, software engineers need to guide the growth of AI. We talked about the language models themselves, but software engineers must fine-tune AI's behavior, apply best practices, and make sure the final product (the part that reaches the general public) has guardrails that enforce ethical behavior.

Also: Who owns the code? If ChatGPT's AI helps write your app, does it still belong to you?

This is tough. Software engineers now have what I guess you could say is a "duty" that goes beyond coding and debugging. Essentially, the software engineers building AIs are the primary "guardians" and "teachers" of AI systems. Therefore, it's incumbent upon these engineers to ensure that AI tech is ethically developed and implemented.

Also: If you use AI-generated code, what's your liability exposure?

Let me be clear here: this is very much outside the comfort zone for most coders. Most software engineers are trained in process and analysis, as well as implementation and coding practice. They are not trained in philosophy and ethics. And most software engineers are most certainly not trained in how to handle it when different groups of humans have different perspectives on what's ethical, or even on what's right or wrong.

At ZDNET, we've long been covering the issue of bias as it is manifesting in AI systems. It's not a simple problem. But biases in AI can lead to real-world problems, such as unfair loan decisions or hidden biases in hiring practices. AI is becoming integral in sectors like health care, finance, and public safety, among many others.
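
One place engineers can start is simply measuring outcomes by group. Here's a minimal Python sketch with invented loan decisions; an approval-rate gap doesn't prove bias on its own, but it's exactly the kind of signal worth investigating:

```python
from collections import defaultdict

# Invented model outputs: (applicant group, was the loan approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += ok  # True counts as 1, False as 0

for group in sorted(total):
    rate = approved[group] / total[group]
    print(f"group {group}: {rate:.0%} approved")
# group A: 67% approved
# group B: 33% approved -- a gap that warrants a closer look
```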

If AI were misused, or simply implemented without a wide range of considerations around equitable decision-making, the results could be devastating. As such, engineers need to build access controls, usage monitoring, audit mechanisms, and more into AI-based systems.
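
As a sketch of what one such guardrail might look like (entirely illustrative -- real audit systems are far more involved), here's a Python wrapper that logs every AI-backed decision along with its inputs, so there's a trail to review when a decision is challenged:

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

def audited(decision_fn):
    """Record every decision this function makes, with its inputs."""
    @wraps(decision_fn)
    def wrapper(*args, **kwargs):
        result = decision_fn(*args, **kwargs)
        audit_log.info(json.dumps({
            "timestamp": time.time(),
            "function": decision_fn.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "decision": result,
        }, default=str))
        return result
    return wrapper

@audited
def approve_loan(credit_score, income):
    # Stand-in for a real model call; the thresholds are made up.
    return credit_score > 650 and income > 30_000

approve_loan(700, 50_000)  # the decision and its inputs land in the audit log
```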

But these issues go well beyond what's possible in a traditional software engineering workflow. Teams deploying AI systems will need to loop in experts in the fields of psychology, social science, ethics, community management, and more.

Because modern large language models are trained on the internet and the contents of libraries and databases, they will inevitably inherit the biases in those sources. And where a biased phrase might merely cause a twinge of discomfort in a reader, biased AI-based decision-making could cause permanent, physical, costly, and painful damage to individuals.

All of that has to be managed in the software, but guided by professionals with a broader worldview than is typically taught in engineering school. 

Is AI going through an 'Oppenheimer moment' now?

Some time ago, I took my last high school chemistry class. Pocket calculators were popular, but my chemistry teacher insisted we all learn to use the slide rule to do calculations.

I have never, ever used a slide rule since that class. My teacher was teaching a skill that was clearly obsolete. While I guess there might be some benefit to knowing how to use a slide rule in some post-apocalyptic scenario where there are no power sources, I figure if we're dealing with an apocalypse, we'll have bigger problems than a slide rule can help fix.

As an engineer, I don't use slide rules. I use powerful computers, complex coding, spreadsheets, and the entire range of capabilities our modern digital world provides. My old teacher's view of what it took to "do science" was vastly different from what my actual "doing science" practice became over my career.

As we move forward in a world where AI tools are as prevalent and game-changing as PCs were when I was a young graduating engineer, our tools will be very different. Because our tools will incorporate AI capabilities, we'll need to expand our skills to understand and use them.

Many engineering disciplines require their practitioners to actively consider safety. Bridge builders need to understand how to provide oxygen to workers, and keep the sea at bay when using caissons to construct bridge footings. Aeronautical engineers have to understand how to keep a plane in the air, and oxygen in the cabin when flying above the clouds. Road builders need to understand how to carefully shape and sculpt explosions to clear ground without causing damage to surrounding areas.

And so, software engineering must progress from tests and validations to true considerations of societal safety. Here are some final thoughts to keep in mind as we consider the future of AI and software engineering.

We are standing on the cusp of a new era, as transformative and different and empowering and problematic as were the industrial revolution, the PC revolution, and the dawn of the Internet. The tools and methodologies we once relied upon are evolving, and with them, our responsibilities and ethical considerations expand.

We must keep in mind these three pillars of transformation:

  • Embrace change: Just as the slide rule became obsolete with the advent of modern calculators and computers, traditional software engineering practices will need to adapt in the face of AI's capabilities. This doesn't mean discarding foundational knowledge, but rather augmenting it with new insights and tools that AI brings to the table.

  • Adopt a holistic approach: The integration of AI into our digital landscape necessitates a holistic approach to software engineering. It's not just about coding anymore; it's about understanding the broader implications of our creations, from societal impacts to ethical considerations.

  • Practice continuous learning at all levels: The rapid advancements in AI mean that continuous learning and adaptation are essential. Software engineers will need to stay current with the latest AI developments, ensuring they harness its power responsibly and effectively. Likewise, AI systems need to continually learn as well, adopting new best practices and guardrails as the implications of their deployment become apparent in day-to-day life.

It's fitting that in the middle of a year in which generative AI has fired up the technology industry, the film Oppenheimer has taken over our movie screens. The biopic follows the life of J. Robert Oppenheimer, the theoretical physicist best known as "the father of the atomic bomb."

In essence, Oppenheimer serves as a poignant reflection on the costs of scientific advancement, the weight of responsibility borne by those at the forefront of innovation, and the complex interplay between science, ethics, and politics in the modern world.

Director Christopher Nolan said in a recent interview, "When I talk to the leading researchers in the field of AI right now … they literally refer to this as their Oppenheimer moment."

That same statement perfectly describes how the ripple effect of AI in software engineering explodes from coding practice into our communities at large.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.
