When Digital Transformation Polarizes the Organization
Part Four of Four: The Move from Myth to Reality.
By Kurt Jonckheer
Chief Executive Officer
Like a car engine designed for a specific type of vehicle, no streaming platform is capable of handling all use cases. A platform may be a platform, but like an engine, its performance, cost, maintenance, and required engineering expertise are dependent on what the platform is built for in the first place.
The same holds true for off-the-shelf event-driven platforms and proprietary PaaS offerings. Each will run multiple, diverse pipelines only according to its inherent levels of performance, security, and support.
Consider some of the new economy giants, like Uber, Airbnb, Netflix, Facebook, and LinkedIn. You will find many overlapping architectures and deployed technology frameworks within these market leaders. Most of them run on hybrid infrastructures, and several have even been forced to invent new distributed computing frameworks.
With hardly any exception, all these players run their own architectures, their own use-case-tweaked stacks (yes, plural), and are cloud-native and cloud-agnostic at the application level. What they all have in common, however, are two unwavering components: data and software control.
Use-case differences dictate customized integration paths.
Regardless of whether an architecture is Lambda or Kappa, use-case types will always drive flow patterns, data volumes, and specific latency requirements throughout the stacks. All will operate under different enterprise-specific data-governance rules and key security policies which, in turn, necessitate specific integrations.
In addition to standard industry certifications, organizations will also have their own internal and external company policies. As such, buying something off-the-shelf is either impossible or slingshots the company into complete vendor lock-in.
Additionally, most enterprises don’t move to the cloud in greenfield environments. Unlike new economy players with the luxury of starting from scratch, most companies are forced to deal with decades of legacy environments. Difficult choices must be made on when and where to start a strategic transformation.
This gives rise to an incremental approach with manageable and isolated steps. Attempting to slingshot the process is certain to polarize IT strategies, teams, and data streams, and will ultimately bring any form of innovation and migration to a complete standstill.
Digital transformation mandates new mindsets.
New software-development challenges, new tools and changing environments will require sufficient space for your IT and R&D people to flourish. Failure to establish a solid digital transformation paradigm is likely to lead to a hodge-podge of disparate decisions and vendors. Costs will grow exponentially, and new target architectures will never be reached.
Decades of daisy-chained, batch-upon-batch architectures simply cannot cope with the transformation requirements on their own. Integration through old and new JDBC, ODBC, Change Data Capture, data lakes, data warehousing, and BI tools, as well as new streaming platforms, will be crucial. The same is true for different storage types.
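To make the integration point concrete: bridging a legacy batch source into an event stream often starts with a polling-based change-data-capture pass. Below is a minimal, stdlib-only Python sketch of that idea, using sqlite3 as a stand-in for a legacy JDBC/ODBC source; the table name, column names, and function name are illustrative assumptions, not drawn from any specific vendor's API.

```python
import sqlite3

def snapshot_then_stream(conn, table, last_seen_id=0):
    """Yield rows added after last_seen_id as events: a minimal
    change-data-capture pattern that polls a batch source and emits
    only the new rows downstream."""
    cur = conn.execute(
        f"SELECT id, payload FROM {table} WHERE id > ? ORDER BY id",
        (last_seen_id,),
    )
    for row_id, payload in cur:
        yield {"id": row_id, "payload": payload}

# Demo against an in-memory stand-in for a legacy database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO orders (payload) VALUES (?)",
    [("order-a",), ("order-b",), ("order-c",)],
)

# Pretend a prior run already processed row 1; only rows 2 and 3 flow on.
events = list(snapshot_then_stream(conn, "orders", last_seen_id=1))
```

In a real deployment the cursor position (`last_seen_id`) would be checkpointed durably, and the yielded events would be published to a streaming platform rather than collected in a list; the sketch only shows why the batch-to-stream seam requires deliberate integration work.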
One can only imagine how much strain a massively bi-directional IoT use case, where assets come and go on a 24×7 ad hoc basis, will put on your stack. Compare that to a massive data-ingress use case with heavy analytic requirements, or a use case where batch systems must be connected to drive platform consolidation through intense data wrangling, transformation, and normalization scripts.
Similar technology frameworks and components will be present; however, they will be encapsulated in specific tuning and dimensioning, security implementations, and scalability and latency requirements, and surrounded by integrations toward sources and sinks.
Good luck with finding a single off-the-shelf stack that does it all.
Future-proof doesn’t mean fool-proof.
In general, the term “future-proof” refers to the ability of something to continue to be of value into the distant future—the item in question doesn’t become obsolete. In modern English usage, the informal term “fool-proof”, or its more derogatory equivalent, “idiot-proof”, describes designs that cannot be misused, either inherently or inadvertently. The implication is that the design is easy to work with, even by someone without the skills to use it properly.
But in reality, this is a naïve idea at best. Douglas Adams wrote in Mostly Harmless, “a common mistake that people make when trying to design something completely fool-proof is to underestimate the ingenuity of complete fools”.
Unfortunately, I’ve seen “fool-proof” mistakes happen too many times to ignore what Adams has to say. Keeping your future under your control implies, by default, that you must proactively manage and maintain all aspects of your core business before, during and after digital transformation. Relying on “fool-proof” technology or 3rd-party proprietary solutions just doesn’t work.
Protecting your exit strategies, safeguarding data governance, and increasing your organization’s data and software literacy will always require heavy lifting on management’s part as well as the need to work through steep learning curves and what are often difficult changes along the way.
Oftentimes, such a daunting task doesn’t seem worth the investments it requires. But it is. In the mid- to long-term, keeping your value-added differentiators under your control will be fundamental for protecting your market position.
From Polarization to Unification
Digital transformation goes beyond the organization’s IT, CIO, or CTO divisions. In fact, it applies across the board.
Data-driven societies aren’t just driven by technology. They’re also characterized by patterns of relationships between individuals who share a distinct culture, including the specific cultures of companies themselves. So, whether at the society level or the organizational level, a data-driven environment requires at least a minimum of data savviness.
The responsibility for promoting a data-driven culture falls on the shoulders of upper management. So does keeping a data-blasting shift outside of the organization from eviscerating the company of its core competencies. Unless the end game is kept in sight from the very beginning, too many processes outsourced to too many 3rd-party vendors can lead your company—and your business—to a point of no return.
This is why so many digital transformation projects fail to deliver their expected outcomes. Budgets run completely out of whack. Good people leave. Companies lose their historical insights. Projects come to a grinding halt. The entire organization is divided, if not completely torn apart.
None of this needs to happen if all aspects of the digital transformation process are aligned and well-managed—both within and beyond the IT and R&D departments. To achieve the outcomes you want, the entire management team must start by understanding what the future needs to be. The priorities are to:
- Listen to your core people
- Invest in new development and integration
- Don’t rely on a flat percentage of OPEX/CAPEX budgets
- Apply cost-cutting where appropriate
- Select specific use cases for preliminary launches
- Integrate rollouts with legacy systems
- Continually expose people to the new norm
- Bundle available resources and skillsets together
- Work in short bursts of deliverables
- Demonstrate in-progress results
- Evangelize the message from the top down
Most importantly, don’t try to stuff five liters of water into a one-liter glass bottle. If you do so, the bottle has no choice but to burst.
Take the time to read the full White Paper at your convenience.
Download it as a PDF.