How well is business technology serving your business?

As your organization starts to scale, bare-bones IT support can be a blocker. Are you outgrowing your “startup IT”? Technology solutions drive growth. Do you know what your next steps should be? 

  •  Does security still feel like an afterthought?
  •  Is your day-to-day IT support disorganized and reactive?
  •  Are departments solving their own tech problems (or living with them)?
  •  Are processes, projects, and services (and invoices) piling up with no top-down plan?
  •  Would you like to stop outsourcing and bring IT support (and knowledge) in-house?

Your company probably needs a strategic, forward-looking IT plan.

My Services

I provide a suite of IT consulting and interim leadership services geared toward helping startups and growing companies better understand, scale, and mature their corporate IT function.

Contact me today to discuss your needs and how I can help.

My Expertise

In my 30+ years building and managing IT at growing, geographically dispersed startups across various industries, I’ve seen it all. I’ve created IT roadmaps and successfully managed countless technology changes to align with company goals and avoid common pitfalls, using real-world resources and budgets.

Technical Debt: What do Business Leaders Need to Know?

First of all, what is technical debt?

Well, imagine I lose a bet, and I owe you a robot.  That is a form of technical debt.  But that’s not what we’re talking about.

The technical debt we’re talking about is basically anything in your company’s tech environment that needs to be fixed, updated, or replaced, but doing so would be painful – too much cost, time, or disruption – so it’s still there.  It’s the IT equivalent of “deferred maintenance” on a house, like that leaky foundation the owner fully intends to fix, later… soon… next year.

In an ideal world, technical debt would be addressed before it becomes a problem.  In the real world it tends to be addressed when (and because) it becomes a problem.  Once it’s urgent, you find a way.

Of course, to even start addressing your tech debt you have to be aware of it.  Business leaders may have a good understanding of what technical debt is, yet still not know how or where it’s affecting their company right now.  Unfortunately, technical debt is often regarded as “small-picture” operational stuff, and that’s one reason it accumulates.  Given finite resources, business priorities usually get more attention than operational priorities.

Tech debt: two buckets

So for starters, it’s helpful to look at tech debt as falling broadly into two buckets.  Business leaders are usually aware of bucket #1 but not #2:

  1. “Front end” – This type of tech debt inflicts operational pain that everyone can see.  It could be a faulty system for getting laptops to new employees, or an error-prone, manual accounting process, or some business platform that can’t scale.  Anything that’s presenting a visible roadblock to business progress.
  2. “Back end” – This is the stuff that keeps IT people awake at night.  Cracks in the dam that your end users have no knowledge of, yet.  It could be an old server with no support contract, an inability to modify some critical-yet-ancient application, or a badly designed database that’s just going to get slower and slower.

Business leaders would be wise to make sure a mechanism exists for regular dialog with the IT team – for a lot of good reasons, one of them being the identification, documentation, and assessment of the above risk categories.  Because, at the end of the day…

Technical debt is business risk!

There are times you have to take a step back to move forward.  Fixing tech debt is a good practice partly because it almost always leads to implementing better business practices.  On the flip side, the more your technical debt builds up, the more it introduces both security and operational risk:

  • Security risk – because the longer an outdated system or process is in place, the more likely that it’s falling behind on security capabilities.
  • Operational risk – because the longer an outdated system or process is in place, the more day-to-day operational roadblocks it will present, the greater the risk of downtime, errors, or data loss, and the more disruptive the eventual fix or replacement is going to be.

But my company is a startup – that means I don’t have any technical debt, right?

Sort of.  It’s true that your startup probably doesn’t have “legacy systems” or processes that date back to the Eisenhower administration.  BUT… this is important: the startup phase is a notorious breeding ground for future tech debt!  What I mean is, if your team is slapping together bare-minimum IT systems just to get your operation off the ground, promising you’ll circle back and fix everything when things calm down… spoiler alert: things won’t calm down.  You’ll only end up fixing those systems when they turn into problems later.  If you’re not building your “startup IT” on solid, scalable, best-practice foundations, today’s ad hoc startup IT has an uncanny habit of becoming tomorrow’s technical debt.


What are some common examples of technical debt?

  • Using systems that are no longer suitable for the task, can’t scale, or can’t integrate – these are common issues that become apparent as companies grow
  • Failure to adopt new technologies – Sometimes it’s not about what you’re doing, but what you’re not doing that your competitors are doing.  This category existed before AI, but AI is the perfect example.
  • Continuing to use “quick fix” or band-aid solutions indefinitely
  • Continuing to use unpatched systems – “Patching” means applying vendor-supplied updates to fix bugs or security vulnerabilities.
    • If you don’t patch a particular system – the further behind it falls on patching, the longer it will take to catch up, and the more likely that cumulative patches will introduce unexpected problems.
    • If you can’t patch a particular system – for example, because it’s now “end of life” and no longer supported, or it’s dependent on some other system that’s incompatible with the patches – the system probably needs to be replaced.
  • Continuing to use systems that were built on old platforms – this most often refers to custom applications, for example a 20-year-old home-grown business system developed in COBOL with poorly-written, undocumented code and lacking security.  It’s a productivity roadblock, a support nightmare, and a security disaster waiting to happen.  But… it does its job and would have to be completely rewritten!

And don’t forget business processes

Technical debt affects not only systems – it can also plague your processes and your data.  Some examples:

  • Processes that are manual but could (and should) be automated
  • Data that is siloed but should be shared between systems
  • Integrations that depend on clean, normalized, easily accessible data, but your data… isn’t
  • Any process that’s not adequate to support your desired business capabilities

These are difficult problems because solving them usually means undertaking large projects.  It’s human nature to postpone these because there are usually many more interesting and urgent issues that you want to apply your limited resources to.

How can a small company better keep track of, quantify, and prioritize its technical debt issues?  Well, this is where I mention I can help 🙂  There are ways to incorporate technical risks into your overall business risk management, and to structure your process for overseeing corporate projects and change, so that your major “tech debt” issues have visibility, don’t slip through the cracks, and can be addressed appropriately according to their level of risk.
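
To make this concrete, here’s a minimal sketch in Python of the kind of simple risk register that gives tech-debt items visibility.  The item names and scores are entirely hypothetical – the point is the method: score each item by likelihood times business impact, then review the ranked list regularly.

```python
# Hypothetical sketch: a minimal tech-debt risk register that scores each
# item by likelihood x impact (both on 1-5 scales) so the riskiest debt
# surfaces first.  Items and scores are illustrative, not a real assessment.

def risk_score(likelihood: int, impact: int) -> int:
    """Simple risk score: likelihood (1-5) times business impact (1-5)."""
    return likelihood * impact

debt_register = [
    {"item": "Unsupported legacy server", "likelihood": 4, "impact": 5},
    {"item": "Manual invoice process",    "likelihood": 5, "impact": 3},
    {"item": "Undocumented custom app",   "likelihood": 2, "impact": 4},
]

# Attach a score to each entry, then rank highest risk first
for entry in debt_register:
    entry["score"] = risk_score(entry["likelihood"], entry["impact"])

ranked = sorted(debt_register, key=lambda e: e["score"], reverse=True)
for e in ranked:
    print(f'{e["score"]:>2}  {e["item"]}')
```

Even a spreadsheet version of this does the job – what matters is that every known debt item gets a score and a place in the queue.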

Cybersecurity: What is Zero Trust and Why do I Need it?

Let’s start with what Zero Trust IS.  You could consider it a security model, a strategy, a framework, or a philosophy – let’s just say it’s a coordinated set of security methodologies designed to achieve the goal of “never trust, always verify”.  The idea dates back to 2010, when it was first developed by John Kindervag at Forrester Research.

OK, great!  What does “never trust, always verify” mean in the real world?  To understand why this is even an issue, we need to look at how cybersecurity is configured in “traditional” organizations.

In olden times (before say 2010-to-2015-ish), organizations had a perimeter.  This means there was an office network with a pretty well-defined “inside” and “outside”.  If you had a firewall to protect the perimeter against outside threats, plus a VPN for securing outside connections, and antivirus on every computer in case anything got through, you were good.  OK yes, this is a simplification – you would follow other security best practices too – but there was always this inside-outside mentality, and connections inside the network were largely trusted.  (Think of this as the castle-and-moat model.)

But nowadays, there is no perimeter.  Some organizations don’t even have an office.  How did we get here…??

  • Everyone can work remotely: The Covid pandemic forced, and technology enabled, a massive shift to remote work over the last 5 years
  • Everything can be in the cloud: Online/SaaS applications, and public cloud infrastructure, have matured to the point where businesses don’t need any physical servers.  Today, many organizations, especially smaller and newer businesses, have no on-premise servers at all.

So if your people and your servers are not in the office, and all your stuff is connecting to all your other stuff across a giant internet spider web, what is your perimeter?  The perimeter becomes each individual user identity, and each individual connection to each resource, wherever or whatever it may be.  THIS is the mindset that Zero Trust introduces.  (Think of this as the guard-at-every-door, everyone-wears-a-suit-of-armor model.)

And this is why adoption of Zero Trust has really taken off over the last 5+ years.  An entire new industry of Zero Trust tools and technologies has emerged. One recent report estimates that 81% of organizations have now fully or partially implemented a Zero Trust model, and in 2021, the Biden administration mandated that all US Federal agencies meet a certain level of Zero Trust maturity by 2024.

So what do you really need to know about Zero Trust?  Here are some of the basic principles and associated technologies.

  1. Assume breach
    This is a logical extension of the “never trust, always verify” principle.  “Assume breach” leads to an approach that assumes the bad guys are already “in”.  Because they are.  There are no locks on the gates to the public internet.  So if your people are mingling in the same cyber-space as the bad guys, how do you configure things so that your people can access your stuff but no one else can?

  2. All communications and access requests are secured 
    Every connection needs to be authenticated and encrypted, and all access to every resource needs to be controlled on a per-session basis.
  3. ZTNA vs. VPN
    Yes, we are now officially characterizing VPN as an old-fashioned, “legacy” technology.  ZTNA (Zero Trust Network Access) is a newer technology that provides much more granular control over each individual access request to each individual resource, wherever it is – inside or outside your network.

  4. Microsegmentation
    This is related to the ZTNA concept, though it can be packaged as a separate technology (called ZTS, or Zero Trust Segmentation).  Microsegmentation aims to replace “legacy” methods of internal network segmentation that use firewalls, switches, and routers.  Instead of creating broad network segments (divided by office, department, building, etc.), it creates highly granular, individual “segments” for every communication request to every application or resource within a network.

  5. Using best practices for account management
  • Least privilege: this is the big one, and it’s not a new concept, but Zero Trust relies heavily on the practice of giving every connection to every resource only the level of access permissions it needs, and nothing more.
  • Adaptive access: this means access requests are managed based on context, such as a user’s location, the time of day, the computer they are accessing from, the state of that computer, the user’s baseline behavior pattern, and so forth.
  • PAM: This stands for Privileged Account Management, referring to administrator access, which has its own set of best practices due to the elevated privileges that system administrators must have to do their jobs.
  • MFA: Zero Trust architecture relies heavily on a variety of Multi-Factor Authentication methods.
  • Strong passwords or elimination of passwords: beyond requiring strong passwords, it’s becoming more common in the Zero Trust world to eliminate user passwords altogether in favor of biometrics or digital certificates.
  • Log, audit, and automate as much as possible!  (Self-explanatory)
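
To illustrate how several of these principles combine, here’s a toy Python sketch of a per-request access decision applying least privilege, adaptive (context-based) checks, and MFA.  The field names, roles, and rules are hypothetical – this is not drawn from any real Zero Trust product.

```python
# Illustrative sketch only: a toy "adaptive access" policy check combining
# least privilege with context signals (location, device posture, MFA).
# All names and rules are hypothetical examples.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    resource: str
    role: str              # role the user presents
    country: str           # where the request originates
    device_compliant: bool # device posture check
    mfa_passed: bool       # MFA for this session

# Least privilege: each role is allowed only the resources it needs
ROLE_PERMISSIONS = {
    "finance": {"erp", "payroll"},
    "sales":   {"crm"},
}

ALLOWED_COUNTRIES = {"US", "CA"}

def evaluate(req: AccessRequest) -> bool:
    """Every request is verified on its own; nothing is trusted by default."""
    return (
        req.resource in ROLE_PERMISSIONS.get(req.role, set())  # least privilege
        and req.country in ALLOWED_COUNTRIES                   # adaptive: location
        and req.device_compliant                               # adaptive: posture
        and req.mfa_passed                                     # MFA every session
    )

ok = evaluate(AccessRequest("ana", "erp", "finance", "US", True, True))
bad = evaluate(AccessRequest("ana", "crm", "finance", "US", True, True))
print(ok, bad)  # True False
```

Real ZTNA platforms evaluate far richer signals (posture services, behavioral baselines, risk scores), but the shape of the decision – verify everything, every time – is the same.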

Zero Trust architecture does continue to rely on other traditional security controls as well, like endpoint protection (antivirus) and patch management. And it’s important to note that there is no single product or system that provides all of the above!  Although there is now a large ecosystem of tools to choose from, implementing Zero Trust requires not only making a substantial shift from the traditional security mindset, but a mix of new tools, methods, and technologies too.

This is the part where I mention I can help 🙂  If your company is considering integrating Zero Trust principles into its cybersecurity stance, but you’re not sure where to start, please reach out.

Finally, if you’re interested in learning more, here are a couple of additional resources:

NIST SP 800-207 (Zero Trust Architecture) – NIST’s Zero Trust guidelines, first published in 2020

CISA Zero Trust Maturity Model – updated in 2022 

Small IT Departments: How Do You Demonstrate Your Value?

As cost centers (not revenue centers), IT departments often struggle to demonstrate the value they provide.  In startups or very small companies, IT “value” is often not even on the radar.  It’s just, let’s get an internet connection and some laptops and email addresses.

Even as you grow, it’s usually taken for granted that IT will provide your company with functioning technology.  And yes, of course there’s value in having working laptops, networks, business systems, and security.  But even when IT starts tracking metrics like help desk response time, security incidents, or system uptime, it’s tricky to assign dollar values to speedy help desk resolutions, or to outages or security incidents that never happen because IT prevented them.

There are other ways of looking at “IT value”.  For most organizations, I think the better metrics are the ones that focus on business outcomes.  The most obvious example here is when IT projects support company goals; these types of outcomes are pretty easy to quantify.  For example, your ERP project was completed on time, allowing finance to manage multiple business entities, which supported the company’s goal of international expansion.

But another important business outcome that’s commonly overlooked is employee engagement:

“An employee’s ability to get work done every day is tied to technology—but these tools also shape how an employee feels about where they work.” (Citrix EX study, 2024)

Why is employee engagement important?  The Covid pandemic was kind enough to give us a crash course on this!  It ushered in a bold new universe of work-from-home and hybrid work models.  These models have provided a ton of benefits, including better work/life balance and reduced office rent costs.  But one not-so-great side effect has been higher turnover.  Employees who rarely or never come into an office have a weaker connection to the organization and are less “sticky” – more likely to move on to new pastures that seem greener.  How do you improve employees’ “stickiness”?  Well, ask your HR team, because they’ve been busy trying to find new, creative ways to drive employee engagement since 2020!

It sounds simple, but improving employee engagement will benefit your business in all sorts of ways, and may be more valuable than you think.  Let’s go to the studies…

  • Employee Engagement: Pay, perks, and bonuses all had an impact but were not as important as individuals’ sense of making progress in their work  (Harvard Business School, 2011)
  • Improved Retention: Employees are 230% more engaged ​and​ 85% more likely to stay beyond three years if they feel they have technology that supports them (Qualtrics, 2021).  
  • Reduced Turnover: An increase in turnover from 12% to 22% reduces productivity by 40% and financial performance by 26% (Journal of Applied Psychology, 2013).  With engaged employees, organizations experience 59% less turnover (Gallup, 2020)
  • Improved Customer Satisfaction: One study showed that engaged employees generate 81% higher customer satisfaction scores (Journal of Applied Psychology, 2013)
  • Increased Discretionary Effort: This refers to effort expended by employees outside their job function.  95% of employees reporting a positive experience with their company say they expend discretionary effort; the number drops significantly for employees reporting a poor experience (IBM, 2018)
  • Flow State: Executives reported being five times more productive while in flow.  If you could increase time spent in flow by 20%, workplace productivity could double (McKinsey, 2013)

Your IT team definitely can, and should, contribute here.  IT is one of the few departments (along with HR) that interacts with every employee.  IT is in a strong position to promote a company’s sense of community with a welcoming onboarding process, good customer service, and strong ongoing communications.

IT is also in a position to develop cross-functional partnerships to identify processes that are too “painful” and manual, and to implement improvements, automations, and tools that make these processes more efficient.  For example, employees on the Sales team gain back 5 hours per week – time formerly spent on some tedious task – which can now be spent on higher-level customer interactions. 

Realizing IT’s potential for improving employee productivity and job satisfaction can contribute to better employee retention and, by extension, lower recruiting and training costs, improved team stability, knowledge retention, and a better customer experience for your clients.

So if the IT department can show it’s doing all THIS for your company, well, now we’re getting somewhere, right?  Viewing IT’s value through the lens of business outcomes helps us better define what we mean by a “functioning technology ecosystem”.  An IT team that’s really good at maximizing employee productivity is going to be seen as a lot more valuable than a team that’s really good at putting out fires.

And this is the part where I say, this is where I can help 🙂  I can help your organization evaluate the technological component of the overall employee experience, and develop a roadmap for improvement. 

It’s all about:

  1. Understanding your organization’s operational pain points
  2. Staying on top of user sentiment
  3. Prioritizing projects that address high-impact needs and user feedback
  4. Quantifying the benefits
  5. Communicating (“Here’s what we’re working on and why”)

It’s not always easy, but if this is your focus, you probably won’t need to struggle so much to demonstrate IT’s value to your organization.

In Small Companies, is IT the Most Neglected Department?

OK, it’s not a competition or anything, but if you work for a startup or small company, is there any department that’s less developed than IT?

If IT is the most “neglected”, it does actually make sense from a business point of view when a company is first starting out.  Temporarily, at least.  For a few reasons:

  1. Your business systems are simple early on.  You have a relatively small number of systems and they’re probably all online.  Your headcount is manageable.  You have no legacy systems, no technical debt, and a small (but growing) volume of data.
  2. IT isn’t a revenue driver.  Company leadership is focused on building the business.  Other, more urgent functions get more attention at first (Product, Sales, Marketing, Recruiting, Finance).
  3. Compliance may not be an issue yet.  Security and compliance are always important, yes, but really only become business critical when you have customers.
  4. IT is technical.  IT does a lot behind the scenes that many in the leadership team don’t necessarily know about or need to understand.  Your business people are busy.  Often, small companies are not even sure where IT should live on the org chart – nobody may really want IT but it has to report up to someone.  In my career I’ve reported to everyone from the CEO to the COO, CFO, CTO, VP of Engineering, etc.  There’s usually an informal agreement in place between Leadership and IT that basically says “you got this, right?”
  5. IT people tend not to be business people.  It’s not just that IT is usually one of the last teams to get a “V” or a “C” leader.  It’s also that IT people below the leadership level tend to be oriented toward technology and process and not toward business strategy (to make a broad generalization.)

So it’s not a problem then?

Well consider this: if internal IT is not a business driver but a business supporter, you’re going to want a proper IT framework in place as soon as is reasonably practical in your growth trajectory, so that IT will be in a position to support the business in its future state.  As you scale, you want to have confidence that IT is going to be a strategic partner in your growth and not always just scrambling to catch up.

Also – spoiler alert – IT is becoming more of a business driver than it ever has been.  This is partly because of evolving compliance obligations but mostly because of AI.  As technology “runs” your business more and more, your business will increasingly need a coherent technology strategy.

The alternative is, your ad-hoc “startup IT” can persist well past your startup phase.  Leadership can get accustomed to IT being “low visibility”, your IT processes and systems can fall further behind, and at a certain point this becomes a blocker to your growth.

Specific problems with keeping your “startup IT” into your scaling phase:

  • Strategy gap – you’ll probably have no one thinking about internal IT and technology strategy as it relates to your business strategy.  Technical solutions will be implemented in a fragmented, bottom-up, duplicative way that won’t scale well.
  • Employee experience – your staff could suffer productivity, job satisfaction, and turnover problems.
  • Expectation gap – if the organization never defines what it expects from IT, or builds a structure to deliver on those expectations, the gap will widen as you scale and you’ll end up not getting what you need from IT.
  • Project & change management – once your business starts to scale, managing change can suddenly become your biggest operational challenge.  Technology will be at the center of it.  Is your current IT ready to face this challenge?
  • Risk and security – you’ll likely be unable to manage technology risk if you have poor visibility into your internal technology.  Comprehensive, centralized information security controls don’t just happen!  Someone will have to map out, oversee, implement, and track them.

This is where I mention that I can assist with all of the above!  Reach out if your company is embarking on a growth phase and you need help structuring and positioning IT to support that growth.

Quantum Computing – How Excited, or Worried, Should I Be?

Quantum computing is coming!  What do business leaders need to know?  Well, you have nothing to worry about unless your business relies on cybersecurity in any way.  In that case, yes, there are very real concerns.  OK, that was a trick question.  I’ll get to the cybersecurity implications further below!  But first I wanted to talk a little bit about how cool quantum computing is.

(Spoiler, just to get this out of the way: practical, real-world benefits – and dangers – of quantum computing are at least several years away, and for some applications, decades.  But we do need to be planning today for the anticipated cybersecurity dangers.)

A QUICK PRIMER ON QUANTUM COMPUTING

Instead of using bits – which are represented in “classical computing” by tiny little transistors or on-off switches – a quantum computer uses qubits.  Processing information in a qubit relies on manipulating and observing the properties – or “quantum states” – of sets of subatomic particles.

In classical computing, a bit is binary: it can be in only one of two states, 0 or 1.  By contrast, a qubit can exist in an infinite number of possible states between 0 and 1, and it’s this property that quantum computers will exploit to allow a degree of parallel processing that could (theoretically) make them millions or even billions of times faster than classical computers.  For certain types of tasks.
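
For the mathematically curious, the standard way to describe this: a qubit’s state is a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1.  Here’s a toy Python illustration (not a quantum simulator, just the arithmetic):

```python
# A toy illustration (not a real quantum simulator): a single qubit as two
# amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.  The measurement
# probabilities are the squared magnitudes of the amplitudes.

import math

# An equal superposition: measuring yields 0 or 1 with 50% probability each
alpha = 1 / math.sqrt(2)
beta = 1 / math.sqrt(2)

p0 = abs(alpha) ** 2   # probability of measuring 0
p1 = abs(beta) ** 2    # probability of measuring 1

assert math.isclose(p0 + p1, 1.0)  # probabilities always sum to 1
print(round(p0, 3), round(p1, 3))
```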

The even more fascinating part is, these subatomic particles appear to violate conventional physical laws!  For example, the particles have a property called “entanglement” that is key to quantum computing.  Entanglement is a phenomenon that connects the state of two or more particles in a way that makes them behave as a single object.  Meaning, you can know the state of particle #2 just by observing particle #1, or vice versa, and this is always true even when the particles are separated by a great distance.

Also, a particle can exist in multiple states at once, until its state is measured, at which point it “collapses” into a single state.  (Remember Schrödinger’s cat?)  And the very act of measuring particle #1 instantaneously causes the outcome to be reflected in particle #2, no matter how far apart they are physically.  This behavior seems to imply faster-than-light communication – something considered impossible in conventional physics (and, in fact, entanglement can’t actually be used to transmit usable information faster than light) – and it calls into question our very understanding of distance and time!

This confused even Einstein, so don’t feel bad.  Einstein theorized that there must be hidden variables within each entangled particle, so that each particle contains the full information about its state and the other particle’s state, and therefore no “communication” is necessary.  But this theory was later disproved.

(There’s a lot more to this, of course, but suffice to say quantum physics cannot be explained by conventional physics.)

WHY IS THE DEVELOPMENT OF QUANTUM COMPUTERS SO SLOW AND DIFFICULT?

It’s mainly due to the difficulty of keeping small sets of subatomic particles perfectly isolated, so they are not affected by their environment yet can still be reliably manipulated and observed.  And scaling a quantum computer by “chaining” qubits together just multiplies the challenges inherent in these fragile particle sets.

SO ONCE WE HAVE USEFUL QUANTUM COMPUTERS, WHAT WILL THEY BE USED FOR?

As an emerging technology, quantum is a bit like blockchain in that it’ll be sort of a precision instrument for targeted uses, not suitable for all computing scenarios.  But in the areas where it excels, it will really excel.

In a nutshell, conventional (“classical”) computers will still be more efficient and cost-effective for everyday tasks like, well, most of what you do on your laptop – running browsers, documents, spreadsheets, audio, video, storage, retrieval, etc.

Where will quantum excel?  It’s going to find a home with tasks that involve massive parallel processing of complex calculations, or huge numbers of iterations or possibilities, but that are also relatively simple to define and input (due to the complexity of feeding instructions to quantum computers).  Some examples:

  • Simulating molecular interactions for drug discovery
  • Cryptography (both breaking conventional encryption and leveraging quantum encryption)
  • Financial modeling
  • Process optimization (logistics, traffic, manufacturing, etc.)
  • Material design
  • Solving complex linear equations
  • Factoring large numbers

HOW ADVANCED ARE QUANTUM COMPUTING CAPABILITIES TODAY?

The largest quantum computers in existence today contain a little over 1000 qubits.  But at this size, they are still mainly used for research and development.  They’re not yet capable of the large-scale, fault-tolerant computations required to solve complex real-world problems.

Also, the large cloud providers (Microsoft, Google, Amazon) are already offering quantum computing “as a service”, but again, development is still in very early stages and the uses are still mostly experimental.  Practical, reliable, cost-effective usage is not imminent. 

Routine use of quantum computing for drug development, for example, is considered to be at least 10 years out.  But eventually, businesses in certain fields – like biotech, pharma, chemistry, physics, materials science, and finance – will be able to build entire value propositions around it.

WHAT ABOUT USING QUANTUM WITH ARTIFICIAL INTELLIGENCE?

I know that’s what you’re thinking.  AI is already taking over our lives – what’s stopping “quantum AI” from basically taking over the universe?

The answer, I think, is only time.

Quantum neural networks (QNNs) are already under development, although (again) still in very early stages.  As this technology matures, at least at the beginning, quantum AI is likely to exist in a hybrid configuration where a quantum computer optimizes specific layers of a neural network, while the rest of the network is processed on classical computers.

But, in summary, yes.  It’s easy to imagine a state in the more distant future where fully realized QNNs enable AI capabilities that far exceed today’s.

YOU MENTIONED CERTAIN CYBERSECURITY IMPLICATIONS?

Yes.  You’ll hear the term “Q-Day” tossed around.  This is the theoretical day when quantum computing renders many of today’s common encryption methods obsolete by being able to defeat their cryptographic algorithms in less than 24 hours.  Some experts estimate that “Q-Day” will arrive within 5-10 years.

But even if the technology takes a while, your data could be subject to a “steal now, decrypt later” attack: adversaries can harvest encrypted data today and decrypt it once quantum capability arrives.  This highlights the importance of having “post-quantum cryptography” (PQC) methods in place well before the threat materializes.
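
One widely used back-of-envelope planning tool here is Mosca’s inequality: if the number of years your data must remain secret (x), plus the years a PQC migration will take (y), exceeds the estimated years until quantum computers can break today’s encryption (z), you’re already exposed.  A trivial sketch, with made-up example numbers:

```python
# Mosca's inequality as a rough planning aid (assumption-laden estimates,
# not a prediction): if x + y > z, "steal now, decrypt later" already
# threatens your data.

def exposed_to_harvest_now(x_secrecy_years: float,
                           y_migration_years: float,
                           z_years_to_qday: float) -> bool:
    """True if data secrecy lifetime plus migration time exceeds time to Q-Day."""
    return x_secrecy_years + y_migration_years > z_years_to_qday

# Example: data must stay confidential 7 years, migration takes 3,
# and Q-Day is estimated 8 years out -> 7 + 3 > 8, so start planning now.
print(exposed_to_harvest_now(7, 3, 8))  # True
```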

NIST has been actively developing and releasing standards for post-quantum cryptography.  Though the threat is still several years out, every business should begin tracking this as a business risk and planning their migration to PQC.  For small companies, this may entail confirming that your systems and service vendors are migrating to the new encryption algorithms, and testing and deploying system upgrades as they become available.  Many commercial products already support PQC (Google Chrome, for example).

——————————————-

Well, I’ll leave it at that!  And finally, this is the part where I remind you I can help your small business strategize on how best to leverage emerging technologies!

More IT Trends for 2025: Where Are We Going to Put All Those Servers??

As a continuation of our future of IT conversation that was focused on AI, I wanted to talk about a few interesting infrastructure trends that we see heading into 2025.  (Disclaimer: I can’t promise not to mention AI 🙂)

CLOUD VS. ON-PREMISE

First off, did you know there’s currently a tug of war going on between “cloud” and on-premise systems?  In the age-old battle between opex and capex, is capex making a stunning comeback??

During the height of the Covid pandemic, a lot of companies accelerated their rush to migrate their full server infrastructure to the cloud.  This was partly by necessity (there were fewer on-site employees to manage servers), and partly a logical conclusion (if everyone is working at home, why do our servers need to be in any specific physical place?).

Well, fast forward a couple of years: cloud sprawl has occurred, the cloud bills are piling up, and many organizations are moving back to “hybrid” strategies.  This is not only due to cost – especially with the newer resource-heavy AI workloads – but also due to privacy, compliance, and performance concerns.  Today, organizations are more likely to keep (or migrate) resource-intensive or highly sensitive systems in-house.  So this is definitely a consideration for any company developing a cloud strategy today.

(NOTE: when I say “in-house” I’m including “private cloud” configurations where you own the hardware and run it in a private data center.)

DATA CENTERS ARE BOOMING

Speaking of data centers, good luck getting space in one!  If you need it, start planning and shopping now.  Data centers are not only making a comeback, they are running at all-time capacity and new ones are being built at an unprecedented rate.  Giant “hyperscale” data centers, which can occupy several million square feet and consume hundreds of megawatts of power, increasingly dominate the industry.

The main reasons for the current data center boom are predictable:

  1. The AI gold rush – yup, AI requires a lot of computing resources.  It needs SO much computing power and generates so much heat (up to 5 times that of “normal” servers) that server rack density has to be reduced, requiring more physical space and much more power to run and cool.
  2. Cloud computing – the major cloud and platform providers (Microsoft, AWS, Google, Meta, etc.) are not going anywhere.  In fact they are the main builders of hyperscale facilities.

ENVIRONMENTAL IMPACT AND SUSTAINABILITY

Though precise data is not available, the consensus seems to be that AI hasn’t yet surpassed cryptocurrency in global energy consumption, but at its current pace, this could happen by 2026 or 2027.  And of course AI’s energy requirements will only keep growing from there.  Sustainability is a major concern!

How will we mitigate AI’s huge thirst for power?

  • Use renewable energy – some hyperscale data centers are already powered 100% by renewable energy, such as two in Nevada run by data center provider Switch.  They achieve this by using a combination of dedicated solar power stations and partnerships with local and state utilities to purchase renewable energy.
  • Use AI itself to automate energy optimization – as with cybersecurity, AI is being called upon to combat a problem that AI itself created: while it consumes huge amounts of energy, AI systems are also used to anticipate system trends and optimize energy usage.
  • Improve hardware and cooling technologies – the major chip makers, like Nvidia and IBM, are actively working on new generation hardware that will use up to 75% less energy.  And switching from air cooling to liquid cooling technologies can reduce overall data center energy consumption by more than 10%.
  • Improve AI model efficiency – This has its pros and cons…
    • PROS: for starters, approaches like China’s DeepSeek LLM, which required far fewer resources to train, will reduce the energy used per application. And “small language models” (SLMs) are increasingly replacing “large language models” (LLMs) for specialized AI applications – which, when you think about it, is most AI applications.  Specialized SLMs require fewer resources to run and train than their more generic LLM counterparts.
    • CONS: on the flip side, improving the efficiencies of the models may just allow more processing to be squeezed into the same computing facilities, so this might end up being a wash in terms of energy consumption.
  • Repurpose the heat – it’s possible to transfer “waste heat” from data centers directly into urban heating systems.  As you might imagine, this is especially useful in cold places, like Helsinki, Finland, where it’s already being done.

And finally, this is the part where I let you know I can help. Just reach out if you need advice on infrastructure strategies — or any growth or IT org strategies — for your small company.

The Future of IT in the Age of AI

To kick off 2025, I originally planned to write about the future of IT in general.  But I quickly surrendered to the fact that, at this moment in time, the future of IT really mostly boils down to two letters: AI.  Since we know AI is going to continue to advance and extend its tentacles into everything, the question is not “will it happen”, but rather “what will it look like?”  So that’s what I’m going to explore here.  Focusing on internal business operations, I’m going to lay out some AI predictions, trends, and bits of practical advice.

WHERE IS AI TAKING US?

I’ll start with the heavy stuff: be prepared for AI to drastically transform the very nature of business over the next 10 years or so.  By 2035, certain aspects of the way businesses operate might not even be recognizable to our 2025 selves.  I believe today’s business leaders need to assume that a very drastic AI transformation will play out, and then work backwards from there to develop at least a rough roadmap for their businesses.  If the drastic scenario doesn’t play out?  Great; you’re ahead of the game.  But what if it does play out and you haven’t prepared?

So… let’s paint a picture of what this extreme AI-powered business scenario might look like ten years down the road.  Put on your time travel helmets – you are now in the year 2035:

  • AI PREVALENCE IN 2035: The rush to embrace AI was universal.  In most industries, competitive survival required it. AI-powered businesses are now the norm.  Autonomous AI agents handle a significant portion of day-to-day operations with virtually no human interaction; these agents communicate directly with each other and carry out a wide variety of decisions and tasks behind the scenes.

  • WORKFORCE IN 2035: Starting back in the early 2020s, AI steadily improved its ability to enhance employee productivity in new and better ways.  Its problem-solving became increasingly sophisticated.  But early on, employees also realized that AI’s ability to mimic – and ultimately displace – humans could pose a threat to their livelihood.  And they were right.  Permanent employees have been significantly displaced by AI systems and freelance specialists.  Now, AI even handles most business decisions!  This has resulted in sharp reductions in human leadership teams.  Of course many roles are still primarily performed by humans.  But the hottest skill sets are now in the development and management of technology systems.  And almost every employee, no matter their role, has had to reskill, upskill, and become much more AI-literate.
  • WORKER ENHANCEMENTS IN 2035:  Most employees have an AI assistant.  Wearable technology for employees is now commonplace, including things like VR glasses, controller gloves, and exoskeletons.  But, more significantly, some companies have now begun to test computer chip implants in human bodies.  These implants are designed to augment an employee’s cognitive capabilities related to their work tasks.  This practice has raised serious ethical controversy because it will ultimately put those who are unwilling or unable to do it at a disadvantage in the workforce.

  • ORG STRUCTURE IN 2035: Most businesses now have a Chief AI Officer (CAIO or CAI) or a Chief Technology Operations Officer (CTOO).  It’s not common anymore to run IT and Operations as separate functions.  They are more likely to be combined within a single “AI” or “Technology Operations” department, which is often seen as the most important group in the organization.
  • SPEND & REVENUE IN 2035: By 2030, the majority of companies were spending more on AI technology than on human employees in most industries. And around the same time, AI started generating more revenue than human employees. AI is now the largest driver of growth.

  • COMPETITION IN 2035: AI technologies have evened the playing field, favoring small businesses whose capabilities have been enhanced to rival those of larger companies and whose agility allows them to adapt more quickly.

OK – back in 2025 (snaps fingers).  Considering that the future imagined above is even a possibility, there sure are a lot of implications for IT in the intervening years, aren’t there?

AI’S EFFECT ON BUSINESS STRATEGY TODAY

AI has become a game changer in strategic planning because of its ramifications for so many different areas of the business.  Not only will it profoundly reshape both the technical and operational sides of the organization, it will ultimately blur the distinction between them.

  • IT is undergoing a “re-branding” – the increased focus on AI is, by necessity, putting IT front and center in conversations around business strategy.  Today’s IT function needs to be (and should be recognized as) a strategic business partner, an agent of change, and a driver of transformation.  IT can no longer just support strategies; it has to originate them.
  • Companies are planning around both AI’s benefits and its threats – including threats to their business model. 
  • IT roadmaps are being shortened – commercial AI offerings are fast to value and have low barriers to entry.  The pace of change will keep accelerating, and as a result it’s harder to plan forward with accuracy.  IT roadmaps are now commonly 12-24 months instead of 3 years, revised more frequently, with more flexibility and less detail the further out they go.
  • Budgets are being adjusted – as we enter 2025, organizations are reducing spend in other core tech areas (such as cloud, security, and IoT) to direct more budget toward AI.
  • Org structures are changing – ideally, someone at the senior leadership level needs to own corporate AI strategy.  Companies should consider creating a formal CAIO (or similar) role, and think about what it would look like to create an internal “Tech Operations” (or similar) team that merges the IT and Operations functions and has a deep focus on business systems.
  • Job roles are changing (or going away) – beyond using generative AI to augment day-to-day tasks and interact with customers, the concept of the “digital worker” is becoming a real thing. Salesforce is already pitching its Agentforce as “digital labor”.  Today, AI efficiencies in sales, marketing, and customer support are already proving their ROI.  By 2026, audio-based AI agents may outpace text-based AI agents.  In 2025 more companies will need to seriously reckon with AI actually replacing jobs. 

A TOP-DOWN APPROACH TO AI

AI initiatives fall into two main categories: 

  1. Customer-facing: building AI into your products
  2. Internal: building AI into your business operations 

If number 1 applies to your company, you’re already doing it (and that’s not the focus of this article).

Number 2 applies to everyone.  Some companies are already quite far along in their “internal AI” journey, and some are just getting started.  Larger companies have the resources to develop customized solutions, but the good news for smaller companies is that they have plenty of options for “buy” rather than “build”.  The catch?  There’s… a lot to choose from.  Commercial AI tools are popping up everywhere, and they’re increasingly powerful, purpose-built, and easy to use.  Vendors are already beating down your door to sell them to you.  It’s tempting to sign up for everything that sounds good in a rush to be “AI first” and show that you’re “doing AI”.

But why would you approach what may be the biggest transformation in your company’s history in such a piecemeal way?  I think a more deliberate, top-down approach will better prepare you for the future.  The challenge for small businesses is not in finding AI solutions, or even in understanding the technology.  The challenge is in developing a holistic corporate AI strategy and plan so that you can prioritize your company’s most important needs, optimize your budget and resources, avoid duplication and unnecessary thrash, and maximize the value that you realize from these solutions.

Your holistic plan would include processes for things like:

  1. Discovery – determining which business processes, pain points, desired capabilities and business differentiators should be prioritized for AI
  2. Developing an AI charter and policy – documenting the organization’s AI goals and expected timelines, approach, governance, and acceptable use
  3. Evaluating and selecting solutions – focusing on solving high-priority problems and creating competitive advantages
  4. Evaluating cost – from all angles: dollars, resources, change, and time
  5. Reassessing the org chart and staffing projections
  6. Educating users
  7. Educating business leaders

…and (8) – last but definitely not least – focusing on your company’s data, which we’ll cover next.

DATA DATA DATA!

OK, this part is less exciting, but the most important thing about AI might not be the technology itself.  All value derived from AI starts with the underlying data!  The “garbage in, garbage out” rule applies here.

Every company is sitting on a mountain of data, collected over years, which, if leveraged to its fullest potential, could provide a competitive advantage and differentiation in its market.

At its core, what does AI do?  It takes mountains of raw data and turns them into refined, accurate deliverables in specified formats.  If you want to pave the way toward a bold future where AI is producing valuable deliverables specific to your business (hint: you do), then your business data has to be clean, standardized, and complete.

So while you’re evaluating which business processes and capabilities you want to prioritize for AI projects, at the same time you’ll need to be looking at the underlying data that enables those processes and capabilities.  Often this data will be spread across multiple systems.  For example:

  • Customer service initiatives may require data from CRM, tech support, accounting, and contract management systems
  • Data analysis initiatives may require data from purchasing, inventory, laboratory, and manufacturing systems
  • Research initiatives may require data from emails, internal documents, policies, white papers, case studies, position papers

TIP for 2025: Understand “RAG” AI (Retrieval-Augmented Generation) – this is where an SLM or LLM (small or large language model) generates the response, but relevant data is first retrieved from your own systems and documents and fed to the model along with the question, grounding the response in your business data.
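To make the RAG pattern a little more concrete, here’s a minimal toy sketch (my own illustration – the document names, scoring method, and prompt format are all hypothetical, and no real product or model API is shown).  It “retrieves” the most relevant internal documents using a simple keyword-overlap score, then assembles an augmented prompt.  A real system would use vector embeddings for retrieval and then send the prompt to an actual model:

```python
# Toy sketch of Retrieval-Augmented Generation (RAG).
# The keyword-overlap scoring below is a stand-in for real vector-embedding
# search, and the assembled prompt would be sent to an SLM/LLM (call omitted).

def _words(text: str) -> set[str]:
    """Lowercase the text and strip basic punctuation from each word."""
    return {w.strip(".,:?!").lower() for w in text.split()}

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents sharing the most words with the query."""
    def score(doc: str) -> int:
        return len(_words(query) & _words(doc))
    return sorted(docs, key=score, reverse=True)[:top_k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved company data to the question (the 'augmentation')."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using this company data:\n{context}\n\nQuestion: {query}"

# Hypothetical internal data the base model was never trained on:
company_docs = [
    "Refund policy: customers may request a refund within 30 days.",
    "Support hours are Monday through Friday, 9am to 5pm Eastern.",
    "The 2025 holiday schedule includes December 24 and 25.",
]

print(build_rag_prompt("What is the refund policy?", company_docs))
```

The point is simply that your own (clean!) business data gets injected at question time rather than baked into the model – which is exactly why the data-hygiene work discussed in this section matters so much.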

To be clear, data hygiene has long been a major business concern.  The goal of maintaining consistent, accurate, clean data – and standardizing it across platforms and eliminating data silos – is nothing new.  But AI is bringing this concern to the forefront, and many companies are finding they need to undertake data cleanup efforts before, or alongside, their AI initiatives.  

The reward?  Commercial AI platforms are enabling processing and integration of large, diverse data sets in a way that small companies could never achieve before.  This is lowering the barrier of entry to “big data” style analysis that large companies have been leveraging for years.

SECURITY

As you can probably imagine, AI presents cybersecurity with a very sharp double-edged sword.  It’s a powerful tool for both the bad guys and the good guys.

  • Bad guys: AI enhancements are already resulting in breaches that are more frequent, more efficient, easier to carry out, and on average more expensive for victims.  AI-facilitated ransomware is a major threat (and is currently a favorite tactic for political retaliation by nation-states).  AI has made phishing and other scams, like deep fakes, increasingly sophisticated and targeted.  And new types of attacks have emerged, like prompt injection attacks that can extract confidential information from generative AI systems.
  • Good guys: AI is able to enhance almost every aspect of threat defense.  It can improve detection capabilities for intrusions, phishing, and malware; it can enable more accurate threat assessments, more refined alerting, and better/faster automated responses.

So, while it’s true that cybersecurity is much more automated than it used to be, so is cybercrime.  Although many companies are watching their spending right now, business leaders need to be aware that cybersecurity is on the front lines of the AI “battle”.  It is more complex and arguably more critical than ever.  Therefore, security is not an area where it’s advisable to replace employees with AI anytime soon.  Yes, AI will improve efficiencies of security teams, but skilled humans are still required to stay abreast of threats, configure the tools, understand the information and responses they generate, take appropriate actions, and communicate with management and clients.

Business leaders should also be aware that there’s a huge cybersecurity talent shortage right now.  So the advice here is, take care of your existing in-house security talent (continue to hire, retain, and upskill), and be aware that mitigation of cyber threats is always multi-faceted: it includes not just AI-powered detection and response tools but also proper access controls, fraud controls, and user training.

NOTE: Of particular concern right now is the public sector, because of its volume of sensitive data, attractiveness to foreign actors, and the fact that the public sector has been slow to adopt AI practices.

CONCLUSION

AI is here, and it is the future.  Have a plan.  Build your AI chops aggressively within your plan.  Use it to solve business problems – evaluate the suitability of available AI solutions for your key processes, pain points, desired capabilities, and competitive differentiators.  Build custom AI solutions if you have the resources to do that, but home-grown solutions will be increasingly unnecessary for small companies.  A certain degree of failure is OK in the early stages – you are building your organizational AI muscles.  Train your staff.  Train your business leaders.  Hire with an eye to AI skills – a good rule of thumb will be to screen for AI skills or AI literacy when hiring in almost every role.

Again, approach this with the assumption that eventually AI will take over all aspects of how you run your business – assume you will be “under water” in just a few years.  2024 may have been the last year that small companies could be tentatively sticking their toe in and splashing around.  By 2025 you really need to have learned how to swim.

And finally, this is the part where I let you know I can help. If the above seems big, scary, or complicated, I can help your business assess its AI needs, put these principles into practice, and chart an AI journey that sets you up for the long haul.