How New Chip Innovations Will Drive IT

Technology never stands still, and computer chip innovations are no exception. It’s time to revisit IT’s strategic roadmaps for what’s coming.

Mary E. Shacklett, President of Transworld Data

May 25, 2023

5 Min Read

At a Glance

  • Faster Parallel Processing in the Data Center
  • Energy-Efficient Processing
  • IoT Connectivity

Fifty-eight years ago, Fairchild Semiconductor’s Gordon Moore, who later co-founded Intel, posited Moore’s Law: the prediction that the number of transistors in an integrated circuit would double every two years. In 1965, when the prediction was first made, it seemed daunting. But the bar it set has driven the semiconductor industry ever since, and the continued growth of chip capacity has spearheaded continuous advances in computing and IT.
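
As a quick illustration, Moore’s Law reduces to a simple doubling function. The sketch below is illustrative only; the 1965 baseline of 64 components is an assumed round number, not a figure from Moore’s paper:

```python
# Minimal sketch of Moore's Law: transistor counts doubling every
# two years from an assumed, illustrative 1965 baseline.
def transistors(year, base_year=1965, base_count=64):
    """Projected component count, doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

for y in (1965, 1975, 1995, 2023):
    print(y, f"{transistors(y):,.0f}")
```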

CIOs keep their ears to the ground when it comes to semiconductor advances because they drive advances in hardware, software, and new business capabilities that IT leaders must plan and train for.

In 2023, that planning and training exercise is no different from years past. New chip designs just coming online will drive a new round of IT innovations that must be penciled into IT roadmaps.

Here are some of the latest chip innovations and what they portend for IT.

Faster Parallel Processing in the Data Center

Enterprise demand for data processing units (DPUs) is on the rise, and for good reason. More IT workloads are shifting into analytics and big data processing, which requires parallel computing that can operate on many different streams of data simultaneously.

A DPU offloads work from a single, central CPU onto many “mini-CPUs” embedded in its circuits. Each DPU contains a CPU, a network interface controller (NIC), and programmable data engines, and it is designed to efficiently process network packets, storage requests, and analytics requests. Its data acceleration engines are tailored for the parallel processing of data, a prerequisite for handling large volumes of unstructured big data.
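
As a rough conceptual sketch only, not a DPU API: standard Python multiprocessing stands in for a DPU’s hardware data engines below, with hypothetical stream names and a word count standing in for an analytics task:

```python
# Conceptual illustration of parallel stream processing: each
# "stream" is handled by its own worker process, the way a DPU's
# data engines work on independent data streams concurrently.
from concurrent.futures import ProcessPoolExecutor

def process_stream(stream):
    name, records = stream
    # Stand-in for a per-engine analytics task over one stream.
    return name, sum(len(r.split()) for r in records)

streams = [
    ("network", ["pkt one", "pkt two"]),
    ("storage", ["read blk 7", "write blk 9 ok"]),
    ("telemetry", ["cpu 42", "mem 17", "disk 3"]),
]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for name, words in pool.map(process_stream, streams):
            print(f"{name}: {words} words processed")
```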

As more DPUs are installed in data centers, CPU loads will become more distributed and will be processed in parallel. The impact on the data center is that more data, especially big, unstructured data, can be processed. DPUs will deliver new data processing capacity to data centers, but they will also force a rearchitecting of data center computing, which will become more decentralized, even within the data center itself.

From a hardware, software, and budgetary standpoint, the arrival of DPUs is likely to force upgrades that must be accounted for in IT budgets, along with explanations of why those upgrades are needed. IT operations personnel are likely to require training so they can run and maintain parallel-processing architectures. The DPU evolution won’t be lost on IT vendors, either: more software package vendors are likely to add enhanced analytics and even big data processing to their offerings.

Sustainable, Energy-Efficient Processing

Data centers contribute roughly 2% of total US carbon emissions. Notably, when a government or industry mandate for progress on environmental sustainability is presented to companies as a condition of earning a contract, CEOs and CIOs look first to the data center as the easiest place to show progress on carbon footprint reduction.

The recent movement to Arm (originally Acorn RISC Machine) chips in data center servers is one example of chips being adopted because they reduce energy usage.

Amazon now uses Graviton2 and Graviton3 64-bit CPU chips, which it designed internally on Arm Neoverse cores (the Neoverse N1 in Graviton2). AWS says Graviton3-based instances can use up to 60% less energy for the same performance.

As CIOs present plans for server upgrades, energy savings will undoubtedly figure into the return-on-investment payback calculus they present to CFOs and CEOs.
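
A back-of-the-envelope version of that calculus might look like the sketch below. Every figure in it (server count, wattages, electricity rate, upgrade cost) is a hypothetical placeholder, not a vendor number:

```python
# Simple energy-savings payback estimate. All inputs are assumed
# placeholders for illustration only.
servers = 200
old_watts, new_watts = 350, 140   # per server; assumes ~60% savings
kwh_rate = 0.12                   # assumed USD per kWh
hours_per_year = 24 * 365

saved_kwh = servers * (old_watts - new_watts) / 1000 * hours_per_year
annual_savings = saved_kwh * kwh_rate
upgrade_cost = 150_000            # hypothetical capital outlay

print(f"Annual energy savings: ${annual_savings:,.0f}")
print(f"Simple payback: {upgrade_cost / annual_savings:.1f} years")
```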

More Robust Mobile Computing

System-on-a-chip (SoC) technology will make mobile computing more powerful than ever.

Apple’s M2 chip announcement in June 2022 (following the M1 in late 2020) heralded a new era of the single chip that contains both the CPU and the graphics processing unit (GPU), instead of the traditional two-chip deployment (separate CPU and GPU chips) that mobile devices had been using.

Bringing the CPU and GPU together on a single SoC facilitates closer CPU-GPU collaboration, with both processors sharing the same memory. The result is a dramatic increase in processing speed.
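
For a concrete taste of that shared memory, here is a hedged sketch assuming a Mac with Apple silicon and PyTorch installed; PyTorch’s MPS backend runs tensor math on the integrated GPU, which draws from the same unified memory pool as the CPU:

```python
import torch

# On Apple silicon, CPU and GPU share one unified memory pool, so
# placing tensors "on the GPU" avoids the copy across a PCIe bus
# that a discrete GPU would require.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # matrix multiply executes on the integrated GPU
print(c.device, c.shape)
```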

The emergence of ever-more-powerful mobile devices will continue to bring more power and ubiquity to the enterprise edge, and IT must be architecturally ready for this. It means more zero-trust networks and security at enterprise edge endpoints, an increase in the number of applications developed for mobile computing, and important decisions about how much data will be allowed to reside on mobile devices versus in clouds or data centers.

IoT Connectivity That Can Ease Costs and Security Concerns

First announced in 2019, the Matter IoT protocol enables smartphones, sensors, and other IoT devices from an assortment of manufacturers to interoperate and cross-communicate offline in closed environments such as a home or a factory. In other words, these devices don’t require internet access to communicate with one another. With Matter, they can communicate locally over Ethernet, Wi-Fi, or Thread, with Bluetooth LE used for initial device setup.
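
As a hedged illustration of that local-only discovery, the sketch below uses the python-zeroconf library to browse the local network over mDNS/DNS-SD, assuming `_matter._tcp` as the operational service type Matter registers over IP. No internet connection is involved:

```python
import time
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class MatterListener(ServiceListener):
    def add_service(self, zc, type_, name):
        # Resolve the advertised service to its local IP addresses.
        info = zc.get_service_info(type_, name)
        addrs = info.parsed_addresses() if info else []
        print(f"Matter node found: {name} at {addrs}")

    def update_service(self, zc, type_, name):
        pass

    def remove_service(self, zc, type_, name):
        print(f"Matter node gone: {name}")

zc = Zeroconf()
browser = ServiceBrowser(zc, "_matter._tcp.local.", MatterListener())
try:
    time.sleep(10)  # listen briefly for local announcements
finally:
    zc.close()
```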

The Matter protocol can potentially save companies money that they now spend on cloud and internet services, since Matter doesn’t require the internet. There is also an opportunity to enhance edge security with Matter, since its IoT devices work together in an offline mode that has no public internet access.

The cost savings and security protection potential should assist IT leaders in selling upgrades to Matter-compatible devices and endpoints at budget time.

AI Chips

Microsoft is reportedly developing its own artificial intelligence chip to power the technology behind AI chatbots like ChatGPT. Amazon, Google, and others are also developing their own AI chips. These chips are designed for AI workloads like machine learning, which depends on rapid data pattern recognition to “learn.”
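
To make “data pattern recognition” concrete, here is a minimal, CPU-only sketch using scikit-learn’s bundled handwritten-digits dataset; dedicated AI silicon accelerates this same class of math at far larger scale:

```python
# Train a small classifier to recognize patterns (digit shapes)
# in pixel data -- a tiny example of the workload AI chips target.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"Pattern-recognition accuracy: {model.score(X_test, y_test):.2%}")
```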

AI is already on IT roadmaps. The new chip designs will only speed AI adoption.

However, IT also needs to think about other issues. What impact is AI likely to have on the business? Which business (or IT) processes will be affected first? Will new business/IT process training be needed? What about the AI chips and technologies themselves? Does IT (and/or the business) have the talent on hand to develop machine learning and to train the AI applications?

All of these should be data points on the IT technology roadmap.

About the Author(s)

Mary E. Shacklett

President of Transworld Data

Mary E. Shacklett is an internationally recognized technology commentator and President of Transworld Data, a marketing and technology services firm. Prior to founding her own company, she was Vice President of Product Research and Software Development for Summit Information Systems, a computer software company; and Vice President of Strategic Planning and Technology at FSI International, a multinational manufacturer in the semiconductor industry.

Mary has business experience in Europe, Japan, and the Pacific Rim. She has a BS degree from the University of Wisconsin and an MA from the University of Southern California, where she taught for several years. She is listed in Who's Who Worldwide and in Who's Who in the Computer Industry.
