Artificial Intelligence (AI). You see it mentioned all over the news headlines. You overhear your coworkers discussing it in the breakroom. Even your family members bring it up at get-togethers.

Much like when the internet first came to be, people are both amazed and uncertain about it. I often hear and see the same questions come up. What is the history of AI? When did it start? What exactly is AI? Is it just ChatGPT? What kinds of AI are there? Will AI take my job from me? Will AI take over the human race? (Definitely no to the last one!)

As someone who works in the technology field, I’d love to answer those exact questions for you and share some of my own thoughts on AI. 

What is Artificial Intelligence?

Let’s answer this question first. Artificial intelligence is the science of making machines think, process, and create like humans. It has become a term applied to applications that can perform tasks a human could do, like analyzing data or replying to customers online. 

The History of Artificial Intelligence 

You might be surprised to learn that AI has existed for a while. 

A timeline of the history of artificial intelligence, from the Turing Test in 1950 to today's Generative AI.

1950s

The Start of AI 

One of the earliest milestones of artificial intelligence was the Turing Test. In 1950, Alan Turing proposed a test of a machine’s ability to exhibit behavior indistinguishable from a human’s. The test was widely influential and is often seen as the start of AI.

In 1956, “Artificial Intelligence” was officially coined as a term by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon at the Dartmouth Conference. The conference is seen as the founding event of AI.

1960s

Early Research and Optimism 

Early AI programs began to develop during this time. Computer scientists and researchers eagerly explored methods to create intelligent machines and programs.

Joseph Weizenbaum created ELIZA, a natural language processing program built to explore communication between people and machines. Later, Terry Winograd created SHRDLU, a program that understood language within a restricted environment.

1970s

The AI Winter 

Early enthusiasm from the 1950s and 1960s faded due to limited computational power and unrealistic expectations. There was a significant decline in interest and funding for AI, so projects fell by the wayside. You’ll often see this time in history called the “AI Winter.”

1980s

Expert Systems Bring Renewed Interest 

Despite the slowdown, some projects continued, albeit with slow progress. Expert systems, designed to mimic human decision-making, emerged and marked a turning point in AI. These systems proved that AI could be useful and beneficial to businesses and industries. Many commercial fields, such as medicine and finance, began using expert systems.

1990s

Machine Learning and Real-World Applications 

Here’s where AI started gaining momentum. During this time, we shifted from rule-based systems to Machine Learning. Machine Learning is just that – a machine or program that can learn from data. We see a lot of Machine Learning in today’s applications, like self-driving cars or facial recognition.

This momentum showed in 1997, when IBM’s Deep Blue became the first computer system to defeat reigning world chess champion Garry Kasparov. The moment showcased AI’s potential for complex problem-solving and for taking on humans at demanding intellectual tasks.

2000s

Big Data Offers AI Advancements

Up until now, AI had been limited by the amount and quality of data available to train and test Machine Learning models. In the 2000s, big data came into play, giving AI access to massive amounts of data from various sources. Machine Learning had more information to train on, increasing its capability to learn complex patterns and make accurate predictions.

Additionally, advances in data storage and processing technologies led to the development of more sophisticated Machine Learning approaches, like Deep Learning.

2010s

Breakthroughs With Deep Learning 

Deep Learning was a breakthrough in the modernization of AI. It enables machines to learn from large datasets and make predictions or decisions based on what they learn. It has driven significant breakthroughs in various fields and can perform tasks like classifying images.

In 2012, AlexNet won – no, dominated – the ImageNet Large Scale Visual Recognition Challenge. This significant event was the first widely recognized successful application of Deep Learning.

In 2016, Google’s AlphaGo played Go against world champion Lee Se-dol and won. Shocked, Se-dol said AlphaGo played a “nearly perfect game.” Creator DeepMind said the machine studied older games and spent a lot of time playing, over and over, each time learning from its mistakes and getting better. It was a notable moment in history, demonstrating the power of reinforcement learning in AI.

2020s

Generative AI

Today’s largest and best-known impact is Generative AI, which can create new content based on previous data. Generative AI has seen widespread adoption in writing, music, photography, and even video. We’re also beginning to see AI across industries, from healthcare and finance to autonomous vehicles.

Common Forms of AI

Computer Vision
Face recognition is one example of Computer Vision.

Computer Vision is a field of AI that enables machines to interpret and make decisions based on visual data. It involves acquiring, processing, analyzing, and understanding images and data to produce numerical or symbolic information. Common applications include facial recognition, object detection, medical image analysis, and autonomous vehicles. 
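To make that concrete, here’s a minimal sketch of Computer Vision in practice: face detection with the OpenCV library. It assumes OpenCV is installed and that a local image named photo.jpg exists; the file name and detector settings are purely illustrative.

```python
# A minimal Computer Vision sketch: detect faces in an image with OpenCV.
import cv2

# Load OpenCV's bundled Haar cascade for frontal faces.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

# Read the image (assumed to exist) and convert it to grayscale for the detector.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces and draw a rectangle around each one found.
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Found {len(faces)} face(s)")
```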

Honestly, Computer Vision is my favorite field of AI. I’ve had the opportunity to work on some extremely interesting use cases of Computer Vision. One was using augmented reality lenses (like virtual reality goggles) to train combat medics. Combining Computer Vision with augmented reality added a level of realism to the virtual training that was previously unattainable. While I would love to go into further detail about what the AI looked like, I signed a non-disclosure agreement. You’ll just have to take my word that it was really cool!

Machine Learning (ML) 
Predictive analytics is one example of Machine Learning.

Machine Learning is a subset of AI that allows computers to learn from and make predictions or decisions based on data. It encompasses a variety of techniques such as supervised learning, unsupervised learning, reinforcement learning, and semi-supervised learning. Common applications include recommendation systems (like Netflix’s), fraud detection, predictive analytics, and personalization.
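For a feel of what “learning from data” looks like, here’s a minimal sketch of supervised Machine Learning using scikit-learn. The tiny fraud-detection dataset is entirely made up for illustration; a real model would train on thousands of labeled transactions.

```python
# A minimal supervised Machine Learning sketch with scikit-learn.
# Each row is [transaction amount, hour of day]; the label says whether
# that (made-up) transaction turned out to be fraudulent (1) or not (0).
from sklearn.linear_model import LogisticRegression

X_train = [
    [12.50, 14], [980.00, 3], [45.00, 10], [1500.00, 2],
    [23.75, 16], [720.00, 1], [60.00, 11], [8.99, 13],
]
y_train = [0, 1, 0, 1, 0, 1, 0, 0]

# The model learns the pattern linking features to labels from the data,
# rather than following hand-written rules.
model = LogisticRegression()
model.fit(X_train, y_train)

# Predict whether two new transactions look fraudulent.
print(model.predict([[15.00, 12], [1100.00, 2]]))
```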

Deep Learning 
Playing chess against a computer program is one example of Deep Learning.

Deep Learning is a subset of machine learning. It involves neural networks with many layers (deep neural networks) that can learn from large amounts of data. It enables machines to learn features and representations from raw data automatically. Key components include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs).
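As a rough illustration, here’s a minimal sketch of a deep neural network built with Keras: a small convolutional network of the same general family as AlexNet. The layer sizes and the random stand-in data are assumptions purely for demonstration; real systems train on large labeled datasets.

```python
# A minimal Deep Learning sketch: a small convolutional neural network (CNN)
# that classifies 28x28 images into 10 categories. Assumes TensorFlow/Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in data just to show the training call; real training would
# use labeled images (e.g., handwritten digits).
X = np.random.rand(100, 28, 28, 1)
y = np.random.randint(0, 10, size=100)
model.fit(X, y, epochs=1, verbose=0)
```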

Natural Language Processing (NLP) 
Mobile language translation applications are one example of Natural Language Processing.

Natural Language Processing (NLP) is a branch of AI that focuses on the interaction between computers and humans through natural language. It enables machines to understand, interpret, generate, and respond to human language. Common applications include machine translation, sentiment analysis, chatbots, and speech recognition.
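Here’s a minimal sketch of one common NLP task, sentiment analysis, using scikit-learn. The example sentences and labels are invented for illustration; real systems train on far larger text collections.

```python
# A minimal NLP sketch: a tiny sentiment classifier (1 = positive, 0 = negative).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_sentences = [
    "I love this product", "Great service and fast shipping",
    "Terrible experience, very disappointed", "The quality is awful",
]
train_labels = [1, 1, 0, 0]

# Turn text into word counts, then learn which words signal which sentiment.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_sentences, train_labels)

print(model.predict(["fast shipping and great service"]))  # likely [1]
print(model.predict(["very disappointed with this"]))      # likely [0]
```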

Generative AI / ChatGPT 
Chatbots like ChatGPT are an example of Generative AI.

Generative AI refers to AI models that can generate new data similar to the data they were trained on. This includes text, images, music, and more. ChatGPT, a specific (and well-known) application of Generative AI, is a language model developed by OpenAI that can generate human-like text based on the input it receives. It uses Deep Learning techniques to produce coherent and contextually relevant responses, making it useful for applications like conversational agents, content creation, and more.
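To illustrate the “learn from data, then generate something new” idea at a toy scale, here’s a minimal sketch of a word-level bigram generator. It is nothing like the deep neural networks behind ChatGPT, but it shows the same basic loop: learn which patterns appear in the training data, then sample new output that resembles it. The training text is made up for illustration.

```python
# A toy generative sketch: learn word-to-word patterns, then generate new text.
import random
from collections import defaultdict

training_text = (
    "artificial intelligence helps people work faster "
    "artificial intelligence learns patterns from data "
    "people use data to make better decisions"
)

# Build a simple bigram model: which words tend to follow which.
words = training_text.split()
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def generate(start_word, length=8):
    """Generate new text by repeatedly sampling a plausible next word."""
    output = [start_word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("artificial"))
```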

AI is Awesome BUT It Has Some Setbacks 

Don’t Worry, AI is Not Going to Take Away Your Job

Artificial intelligence has some great benefits, like processing and analyzing data in minutes, but it’s not perfect. It’s still not HUMAN, it’s not you, and that is exactly why it won’t replace you in your job. 

Take a lawyer, for example. AI has been known to score in the 90th percentile on the bar exam. Awesome, right? But that doesn’t mean AI is going to perform better than an actual lawyer in a case. It just means the AI is better at answering the test questions because of the text it was trained on.

AI is very good and fast at anything with text, but it’s terrible at motor functions and mimicking a person in a non-scripted environment. Like with Generative AI – at some point I know you’ve seen or heard something created by it and have had the thought, “This is definitely AI.”

Ethical Components

As AI continues to be adopted and widely used, I believe it needs legislation around it. For example, do people need to state they are using AI for their work? Can they still claim it’s something created by them if AI created a part or even all of it? Who gets the credit – the person or the machine?

There’s also the concern of creators being miscredited or copyrights being violated. There have been plenty of cases and news headlines in which AI has learned from artists and essentially recycled their work in a slightly new form. Is that a form of stealing?

AI Can Be Volatile

The development of AI is happening fast. Tomorrow, your current AI platform could be outdated. Things constantly change week to week. The features you might love now could be cut and replaced with something new. It can be hard for people and their own Technology Teams to keep up! And did I mention its hallucinations? Sometimes it likes to make up its own false information, so you should always double-check its results!

My Closing Thoughts

AI is real, it’s been here for a while, and it’s both impressive and scary. It’s also very trendy to mention in any news article or headline, which is why you’re seeing it mentioned a lot. And no, it’s not coming for your job anytime soon. While some articles may have you believe it will bring us world peace by the end of the year, it’s still limited in capability.

This is not to understate the need for legislation so that AI is used responsibly, but rather to present the facts of AI’s impressive feats and its notable flaws. It’s important to remember that while things like ChatGPT and Generative AI are newer, we have a long history of AI development, going back nearly as far as computers themselves – likely further back than many of our lifetimes.


About the Author

Michael Adams is a Data Scientist I at Trinity Logistics. Adams holds a Master’s Degree in Data Science and has worked three years in the field, including two years at Trinity. In his current role, he focuses on applying Data Science techniques and methodologies to solve difficult problems, using AI to improve business outcomes, and supporting Trinity’s Data Engineering initiatives to improve quality assurance, ETL processes, and data cleanliness.

Outside of his role at Trinity, Adams has a couple of personal Computer Vision and AI projects of his own! When he’s not tinkering away at those, he enjoys being outdoors, either hiking or kayaking! 

The following is an opinion article on AI in supply chains, written by Russ Felker, Chief Technology Officer (CTO) of Trinity Logistics.

Artificial intelligence (AI) continues to grow its presence in our everyday lives, businesses, and now, supply chains. In a recent MHI Annual Industry Report, 17 percent of respondents said they use AI, with another 45 percent stating they will begin using it in the next five years. And of more than 1,000 supply chain professionals surveyed, 25 percent stated they plan to invest in AI within the next three years. While AI in supply chains has its benefits, it continues to be overhyped as a replacement for human cognitive abilities.

AI in Supply Chains: We Need to Change Our Focus

The technologies leveraged by today’s AI offerings fall flat when applied to the complex day-to-day of supply chain interactions. We need to stop chasing the inflated promises of artificial intelligence and start focusing on the very powerful pattern recognition and pattern-application technologies marketed today as AI to support our teams more effectively. Instead of focusing on AI, we need to reorient on CAI (computer-aided intelligence).

Now, this might seem like a semantic argument, and to a certain extent it is, but the difference between artificial intelligence (AI) and computer-aided intelligence (CAI) is distinct. You might ask, “What does it matter if the technologies are being put in place and create efficiencies?”  “So what if it’s called AI?”  I would say it makes all the difference in the world.

What AI in Supply Chains Currently Does Well

First off, let’s talk about the technologies backing the products that include AI.  As with many technology implementations, they are, by and large, applying rulesets to data. Being able to quickly process a defined pattern against a large data set is both no mean feat and hugely beneficial in a supply-chain setting. In the end, however, these implementations are no different than a rules engine – albeit one with a high degree of complication. For example, take an area of the supply chain that has had this form of technology applied to it, quite successfully, for many years – route optimization.  

Optimizing a single route is relatively simple. But optimize the routes of multiple vehicles in conjunction with schedules of delivery commitments, then layer in things like round-trip requirements and minimizing non-productive miles (miles driven without a load), and the level of complexity moves well beyond what an individual could handle in a reasonable period of time. What can take on this type of task is a processing engine designed to apply complex patterns within a given boundary set – and that’s what current implementations of AI can do. And they do it well.
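As a toy illustration of applying a defined pattern against route data, here’s a minimal sketch of a nearest-neighbor routing heuristic. The stop coordinates are made up, and real optimization engines handle far more constraints (schedules, round trips, empty miles) – which is exactly the complexity described above.

```python
# A toy route-optimization sketch: visit stops using a nearest-neighbor rule.
import math

stops = {"A": (0, 0), "B": (2, 1), "C": (5, 4), "D": (1, 6), "E": (4, 2)}

def nearest_neighbor_route(start, stops):
    """Always drive to the closest unvisited stop next."""
    route, remaining = [start], set(stops) - {start}
    while remaining:
        current = stops[route[-1]]
        nxt = min(remaining, key=lambda s: math.dist(current, stops[s]))
        route.append(nxt)
        remaining.remove(nxt)
    return route

route = nearest_neighbor_route("A", stops)
total = sum(math.dist(stops[a], stops[b]) for a, b in zip(route, route[1:]))
print(route, round(total, 2))
```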

Why AI Can’t Replace Humans

The first problem comes in when we examine the stated goal of AI – the ability for a machine to work intelligently. The difference between hype and reality lies in how we interpret a key word – intelligence. Even the most recent and hyped AI systems continue to fail at the same core intelligence functions, such as understanding nuanced context and applying existing patterns more broadly.

Take Gato from DeepMind, a division of Alphabet, as an example. While it can examine an image and draw basic conclusions, context and understanding are entirely missing from its analysis. Tesla provides another example, where a driver had to intervene because Autopilot couldn’t recognize a worker holding a stop sign as something it needed to respond to. These limitations minimize the tasks for which AI technologies can, and should, be leveraged.

The second problem is related to the first. The acceptance of “AI” by teams has been fraught with, at a minimum, intense change management and, in the worst case, rebellion. If you are bringing AI in to a team, why wouldn’t they conclude that your goal is to replace them? To start down the path of both realistic expectations from senior management and more widespread adoption of technology, we must change the approach we take with stakeholders impacted by implementations of AI. We need to talk about CAI.

It’s Time to Set the Stage for CAI

The acronym alone speaks to a much more practical and achievable marriage between a person and a computer. It’s not the computer that’s intelligent; it’s the person using the computer. What a computer can be taught to do is effectively deliver relevant information to a person at the time they need it, based on their job function and their recognized point in the process. So instead of using a technology such as a recommendation engine to pick a product you might like or a movie you’re likely to want to watch, let’s turn our focus to delivering salient business information to our people. We can effectively use analytics and machine learning to create data recommendations and deliver them directly to users in their primary applications, at the right time in their process, so they don’t have to go find data in multiple reports or sites. Once a pattern is recognized, by people, and the data is organized correctly, again by people, we can use things like machine learning and analytics to deliver that result set effectively and consistently.

What this approach achieves is reduced interaction between a person and the machine, reclaiming time for people to connect with customers outside of transactional conversations. By providing relevant data in-process, you make your team more efficient in their use of the system and create more opportunities for person-to-person interactions and relationships. The goal of any system implementation should be to reduce the time a person needs to interact with it to achieve the desired result. This is different from having the machine do what a person does – which can be a misguided goal of AI. Instead, the system needs to be built to strategically leverage AI in areas that reduce repetitive, rote work, enabling teams to focus on higher-value work.

A 3PL Focused on People

As a 3PL, a large part of our work tends to gravitate toward the identification and management of exceptions, but many times that is reactionary. We can leverage the technologies present today to enhance exception identification and management. Via AI-enabled supply-chain systems, information can be more readily available for teams to apply their intelligence, experience, and skill to solving issues optimally. The ability to recognize, early in the life of a load, the potential for a delayed delivery enables teams to make proactive adjustments with the receiving facility and the recipient. We can gather documents automatically and provide the information in a consumable fashion, reducing the manual effort needed to extract what’s relevant from them.
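As a purely hypothetical sketch of that kind of early exception identification, here’s a minimal rule-based check that flags loads running behind schedule so a person can intervene. The Load structure, field names, and two-hour threshold are invented for illustration; a production system would draw on real tracking data and more sophisticated models.

```python
# A hypothetical sketch of flagging loads at risk of late delivery.
from dataclasses import dataclass

@dataclass
class Load:
    load_id: str
    picked_up: bool
    hours_behind_schedule: float

def flag_at_risk(loads, threshold_hours=2.0):
    """Return the IDs of loads that look likely to deliver late."""
    return [
        load.load_id
        for load in loads
        if load.picked_up and load.hours_behind_schedule >= threshold_hours
    ]

loads = [
    Load("TL-1001", picked_up=True, hours_behind_schedule=3.5),
    Load("TL-1002", picked_up=True, hours_behind_schedule=0.5),
]
print(flag_at_risk(loads))  # ['TL-1001'] - worth a proactive call to the receiver
```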

As a 3PL we rely on two primary skills – intelligent use of data and building and maintaining relationships. Neither a computer nor an algorithm can do either of those alone, but a person backed by a Computer-Aided Intelligence system can. Creating systems that focus on CAI is what allows Trinity’s true source of intelligence, our team, to shine and deliver consistently phenomenal results for our customer partners. Now, you might be the exception and prefer to converse with a chatbot, but I’m guessing if you read this far, you’d rather talk to a person – which is what you get when you call Trinity – a person, backed by computer-aided intelligence systems, who is ready to do the work to create a relationship with you and deliver phenomenal results.
