Bank Of America's 'Why Stop There?' Cloud Strategy - InformationWeek


11:05 AM
Chris Murphy

Getting IT pros to give up old habits is one of the hardest things about building a new, private cloud architecture.

Why do we need different boxes for servers, storage, and network switches in the datacenter? They're all just computers, says David Reilly, who is the global technology infrastructure executive for Bank of America. Why can't companies fill their datacenters with white-box computers stuffed with x86 chips and a ton of memory, controlled by software that can make that box an in-memory storage device today, a software-defined switch tomorrow, and a server next week?  
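Reilly's one-hardware-type idea can be sketched in a few lines. The following is purely an illustrative model (not Bank of America's actual software): a pool of identical white-box nodes whose role is assigned, and reassigned, entirely in software.

```python
# Hypothetical sketch: identical commodity nodes; the "role" is just state
# that software can change, not a property of the hardware.
from dataclasses import dataclass

ROLES = {"server", "storage", "switch"}

@dataclass
class Node:
    name: str
    role: str = "unassigned"

    def assign(self, role: str) -> None:
        if role not in ROLES:
            raise ValueError(f"unknown role: {role}")
        # Repurposing is a software operation, not a hardware swap.
        self.role = role

pool = [Node(f"node-{i}") for i in range(4)]
pool[0].assign("storage")   # in-memory storage device today...
pool[0].assign("switch")    # ...software-defined switch tomorrow
pool[1].assign("server")
print([(n.name, n.role) for n in pool[:2]])
```

The point of the toy model is that the datacenter's composition becomes a scheduling decision rather than a procurement decision.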

This radical departure from today's datacenter approach isn't just idle salon chatter. Bank of America, this country's second-largest bank with about $2.1 trillion in assets, has a team of people right now exploring how to reinvent the bank's datacenters using a private cloud architecture.

The hardest part of getting to this kind of total reset of the datacenter, Reilly says, is persuading technologists to throw out their old ways of doing things and think more ambitiously. It's why Bank of America has created a separate team to develop the company's next-generation architecture, so team members could consider big ideas such as having only one type of hardware. "It's not the technical piece. It's: Why stop there, why not go further, why not do more?" Reilly says.

[Read how more clouds are moving from the concept phase to working: Private Cloud Adoptions On A Roll.] 

The bank wants that kind of blank-sheet thinking from its tech vendors, too. Reilly won't name vendors it's working with, but he says the team stood up two platforms for its private cloud environment, one proprietary and one based on OpenStack. The vendors it's working with are the ones embracing software-driven architecture and nonproprietary hardware.

"The hardware side of what they would do is something they should begin to let go," Reilly says. The bank is running its pilot on two platforms to keep its vendor options open, while "encouraging our large partners to feel like this is something of a burning platform that we need everyone to respond to."

Bank of America has about 200 workloads running on pilot versions of the new architecture, and it plans to put about 7,000 workloads into production this year. That volume still represents a small part of the bank's computing, but if it delivers strong results, it sets the stage for major adoption in 2015.

The business goal is to dramatically cut costs -- as much as 50% from today's best-case datacenter costs, Reilly says -- and let BofA respond more quickly to changing business needs, such as a spike in demand for network capacity or computing power (or, just as important, drops in demand when the bank wants less capacity).

Different technology, different skills
Bank of America's vision for a more flexible, responsive private cloud architecture is similar in concept to what some other companies, from FedEx to Fidelity, are putting into place. The application tells the infrastructure what it needs -- the computing power, the resiliency and recoverability, the geographic dispersion and restrictions, the security and regulatory requirements. The cost of all those elements would also be clear, so as business leaders work with developers to create apps, they understand the infrastructure involved and weigh the related benefits and costs.
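That "application tells the infrastructure what it needs" model can be pictured as a workload manifest plus a cost function. This is a minimal sketch under invented names and prices (nothing here reflects BofA's actual system or rates); it only shows how declared requirements could map to visible costs.

```python
# Illustrative only: a workload declares its needs, and a simple cost
# function makes the price of each requirement explicit up front.
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    cpus: int
    resilient: bool        # e.g. replicated for recoverability
    geo_restricted: bool   # e.g. data must stay in a given region
    regulated: bool        # extra security/compliance controls

# Hypothetical monthly unit costs, chosen for illustration.
UNIT_COSTS = {"cpu": 10.0, "resilient": 200.0, "geo": 50.0, "regulated": 150.0}

def monthly_cost(spec: WorkloadSpec) -> float:
    cost = spec.cpus * UNIT_COSTS["cpu"]
    if spec.resilient:
        cost += UNIT_COSTS["resilient"]
    if spec.geo_restricted:
        cost += UNIT_COSTS["geo"]
    if spec.regulated:
        cost += UNIT_COSTS["regulated"]
    return cost

spec = WorkloadSpec(cpus=8, resilient=True, geo_restricted=False, regulated=True)
print(monthly_cost(spec))  # 8*10 + 200 + 150 = 430.0
```

With something like this, a business leader weighing resiliency against cost is comparing numbers, not arguing with an infrastructure team.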

BofA expects to provision and de-provision that capacity more quickly and in much smaller increments. Its private cloud also is meant to let some workloads eventually move to public cloud environments, be it Amazon Web Services, CenturyLink, Verizon, AT&T, IBM, or other third parties. Certain sensitive data and workloads will always stay on premises.
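The provision/de-provision loop the bank describes — capacity following demand in small increments, shrinking as well as growing — can be sketched as a simple control function. Again, this is a hand-rolled illustration, not the bank's mechanism; the step size and threshold are arbitrary.

```python
# Hedged sketch of elastic capacity: grow toward demand in small
# increments, and de-provision when demand falls away.
def resize(capacity: int, demand: int, step: int = 10) -> int:
    """Return the new capacity after one adjustment cycle."""
    if demand > capacity:
        return capacity + step      # provision a small increment
    if demand < capacity - step:
        return capacity - step      # de-provision surplus capacity
    return capacity                 # close enough; leave it alone

capacity = 100
for demand in (140, 140, 140, 60, 60):
    capacity = resize(capacity, demand)
print(capacity)
```

The detail that matters is the second branch: the savings Reilly describes depend as much on giving capacity back as on acquiring it quickly.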

Image courtesy of geraldbrazell (Flickr).

Changing to a private cloud architecture and a software-centric datacenter will require different skills. Infrastructure pros today define themselves by the gear they run: "I run my company's million-port Cisco network," or "I manage our 50,000 servers." In a cloud model, as those technical silos get blown away, infrastructure pros will need more software skills.

"The infrastructure professional will look a lot more like the software development professional," Reilly says. The bank will need to hire and retrain people. "You try to bring everyone with you. Some people make that journey, and some people don't."

Reilly shares these ideas in his calm British accent, but when he talks of the current state as a "burning platform" and calls the private cloud shift "very much a when and not if," his sense of urgency is unmistakable. "We think it's as big a move as the mainframe to distributed computing was," he says. "We think it's that big a shift in the industry. The opportunity for us is to get there first."

The race is on. In our InformationWeek Private Cloud report published in November, 17% of the companies we surveyed said they use private cloud for all apps, 30% for some apps, and 30% are testing or developing a private cloud. Only 23% said their companies had no interest.

What companies mean by "private cloud," of course, varies. Is it merely a highly virtualized datacenter that allows workload shifting? Or is it the kind of rethink Reilly lays out? "Everybody's looking at this," he says. "I just think we have a higher level of ambition."

Private clouds are moving rapidly from concept to production. But some fears about expertise and integration still linger. Also in the Private Clouds Step Up issue of InformationWeek: The public cloud and the steam engine have more in common than you might think. (Free registration required.)

Chris Murphy is editor of InformationWeek and co-chair of the InformationWeek Conference. He has been covering technology leadership and CIO strategy issues for InformationWeek since 1999. Before that, he was editor of the Budapest Business Journal, a business newspaper in ...

User Rank: Apprentice
3/3/2014 | 6:14:44 PM
Virtualization without the headache of S/W overhead managing virtualized machines
Chris, a good idea to commoditize and virtualize. The biggest hurdle is to standardize. The second hurdle is to build coherent fabric memory channels that allow stringing servers together, system to system and cabinet to cabinet.

The greatest failing of all the x86 contenders was copying the DEC Alpha and its high-speed crossbar architecture. QuickPath and HyperTransport have failed to connect server to server due to signal integrity issues. This copied failure to provide a coherent memory architecture has harmed the progress of clustering and virtualization. Looking into the crystal ball, DEC probably did not foresee people clustering thousands of CPU nodes. Well, neither did Intel and AMD. Instead of lowering the speed of the fabric by a factor of 8 and increasing the copper serial lines by a factor of 8, they completely missed the possibility of commoditizing white boxes through a TRILL fabric, with dedicated systems I/O, memory I/O, analysis I/O, and management I/O all addressed in the TRILL fabric.

The electrical length of the fabric between x86 processors will not allow for a connection to the next server. The SGI UV system addresses this issue through its unique memory architecture; its TPC-C and TPC-H results are untouchable by other virtual systems. In my humble opinion, it is time for a change that allows memory and other system resources to be aggregated.

I believe if one actually does the research, one can find that in many cases copying others' work also means copying their mistakes. Dirk Meyer brought the DEC crossbar to AMD. Intel copied AMD.
User Rank: Apprentice
2/24/2014 | 12:14:43 PM
Old ways vs New
As consumers, most of us easily moved from our BlackBerrys to iPhones, from our PCs to tablets, from keyboards to touch, and the list goes on. The point is that embracing new ways seems intuitive for consumers. In the enterprise world, however, be it companies or individuals, the transition is much harder because it is a matter of one's relevance and livelihood. Couched in partly valid concerns about privacy, security, compliance, and performance, the whole ecosystem pushes back to protect incumbency. As we can all see, it has taken almost 8-10 years for 'cloud' to reach a tipping point in the enterprise (AWS and Salesforce have been around for that long). The BofA scenario is probably just one example of what is playing out in many enterprises, and there is still some way to go. Although the technical and economic benefits of a 'cloud' strategy are obvious, the 'people' situation, where one group wins big and the other loses big, is not a sustainable state. In addition to technology and business experts, it is also time for HR leaders and career mentors to weigh in.
User Rank: Author
2/6/2014 | 3:05:58 PM
Re: Standardization: Love it or hate it, it makes sense
You make a great point -- the skills might be a bigger obstacle than the organizational challenge. Reilly talks about training and hiring to get these software-oriented infrastructure pros. 
User Rank: Author
2/6/2014 | 2:59:13 PM
Re: Microsoft has designed the forerunner
Bank of America found these traditional roles and silos to be an obstacle -- people think of themselves as networking, storage, or compute experts. Reilly says this wasn't a known obstacle going in; the bank discovered it along the way. It's one reason it created a separate team. So is creating a standalone team the only realistic way to make this move to a software-defined datacenter, or can companies knock down the silos sufficiently within existing organizations?
User Rank: Ninja
2/5/2014 | 1:12:23 PM
Standardization: Love it or hate it, it makes sense
The first thing that comes to my mind is what happens when the first wave of BofA employees modernizes all the legacy applications into either environment (OpenStack and/or what I am guessing starts with VM) and moves on to bigger, loftier positions at other firms. Who can possibly manage and keep afloat all these new applications? Skill sets for this type of work are thin enough already; imagine in a few years?

That being said, I think the idea of standardization makes perfect sense. It's like having a giant box of Lego and repurposing blocks as needed. Move loads internally or externally; it's beautiful, especially if you can set up auto-provisioning to spin up new VMs as existing loads get a little heavy.

I'm curious to see if this model takes off at other firms. It's a little like all the creative uses we see for the Raspberry Pi: the hardware is irrelevant, it's all about how it's used.
User Rank: Apprentice
2/5/2014 | 8:44:34 AM
Re: Microsoft has designed the forerunner
I beg to differ. A modern datacenter architecture should not focus only on compute, in whatever size and shape. It is about virtualizing everything, from compute through network to storage. Even more important is changing operations from a classical siloed model to a center-of-excellence / cloud-operations approach with high levels of automation. These organizational changes are what make it hard, not so much the technology; that's why many organizations still shy away from doing the right thing. But with the business putting more and more pressure on IT, we will see many more such software-defined datacenter projects -- by the way, a term VMware coined in 2012.
Lorna Garey
User Rank: Author
2/4/2014 | 5:00:33 PM
Re: IT roles
It's called DevOps, or "revenge of the code jockeys" -- who for many years took grief from data center admins.  
User Rank: Author
2/4/2014 | 2:07:33 PM
IT roles
This view of infrastructure pros becoming software pros echoes a sentiment I heard while visiting EMC yesterday: The roles of system admins, storage admins, and networking admins are merging into one role. We're not there yet, but what happens at your company by the time of the next round of applications?
User Rank: Strategist
2/4/2014 | 12:44:58 PM
Microsoft has designed the forerunner
Bank of America's David Reilly is right. The computing device of the future will be configurable, and software that recognizes the device will fit it into the overall datacenter operation. Microsoft's cloud servers take a giant step in this direction. They're 12U rack-mount units that can be configured as compute-intensive or storage-intensive at the last minute of production, before test and shipment. That leaves lots of capacity near the end of the production line, available to be shipped at a moment's notice. Intel is encouraging the same sort of thinking in switches and will do everything it can to enable it. Watch the Open Compute Project for future developments.