web analytics

Gartner Blog Network

  • Orchestrating "Fusion Teams"
    by Luis Mangi on March 29, 2023 at 9:11 am

      Is there anything more evocative of control than this image? We need to talk about control, or to be more precise, about governance. For some time now, Gartner has been talking a lot about "Fusion Teams": multidisciplinary teams that combine business specialists and IT specialists to deliver solutions faster. To recap, the big difference between "Fusion Teams" and the earlier generations of multidisciplinary teams that have always existed in organizations is that these teams are led by the business. The business plans, builds, delivers, and maintains the solutions created by "Fusion Teams", and that responsibility includes meeting business-outcome targets as well as technology targets. Security, scalability, quality, and architectural alignment are indeed the responsibility of "Fusion Teams". It is also a fact that the IT specialists on "Fusion Teams" are no longer allocated exclusively by the IT organization. Many business areas are already hiring IT professionals (employees or outsourced service providers) in the market. Many clients in Brazil already operate with this distributed delivery model, under varied labels but with this same mission: deliver as fast as possible, be flexible and adjust quickly to context, and build solutions as close as possible to where they will be consumed. Let's say, then, that working with "Fusion Teams" is no longer big news. The question now is how to move beyond the initial pilots and scale this delivery model: take it to all business areas and multiply the number of teams. How can we ensure that the solutions delivered by "Fusion Teams" meet architecture, security, and compliance standards? That was the most recurring question in our interactions with CIOs this month. It's worth remembering that these risks have always been (and will always be) present.
For a long time, in fact, this was the justification we gave for operating with centralized IT, and for condemning what many still refer to as "Shadow IT": those people who developed (and still develop) solutions in the departments without IT's official blessing. The answer lies at neither extreme. First, we need to demystify this idea of autonomous teams. I already wrote about this in my first post on this blog (see "The Rise and Fall of the Spotify Model"). Total autonomy does not exist, but we also cannot insist on a model where everything is absolutely controlled and built by central IT. Consider the figure below: on the left side, "total autonomy" dangerously exposes us to several risks, namely: misalignment of priorities across the company, redundancies, and runaway spending; significant inconsistencies in how the customer experience is delivered, whether to end customers or internal customers; and compliance issues, privacy (remember LGPD?), and security vulnerabilities. On the opposite side sits the "IT dictatorship", where excessive command-and-control brings: a bureaucracy insensitive to the needs of the business; unilateral, opaque decision making; and outdated controls and standards that may not keep up with the speed at which everything is happening in the business. Note that at both extremes, value generation suffers. By different paths and for different reasons, "total autonomy" and the "IT dictatorship" both fail to generate value. As with many things in life (my Aristotelian moment), the solution lies in balance, in the middle path. When we talk about adaptive governance, we are pointing to a layered approach, with different governance styles, each adapted to the different contexts we find in organizations.
"The concept of Adaptive Governance carries two fundamental ideas: adaptation to context (rather than one-size-fits-all approaches) and co-creation (rather than models and rules imposed by IT)." Adaptive Governance aims to deliver: greater speed and agility; architecturally sound, secure, and scalable solutions; and collaborative, transparent decisions. See the figure below: each style is applied according to the complexity/criticality of the business processes supported and the maturity of the teams involved in execution. Core processes will most likely remain under centralized IT management in the vast majority of organizations, in the "Control" style. ERP systems are one example. Teams evolving customer relationship platforms fit well in the "Outcomes" style. Teams building new experience journeys may work in a layer with a bit more freedom, in the "Agility" style. Innovation teams, where there is a high degree of experimentation, will certainly need an "Autonomous" style. Teams that have achieved maturity, measured by their gradual adoption of the best practices defined by IT in each field of knowledge, can be given more autonomy. Adaptive governance runs on metrics, which should be defined across the four dimensions of agile team performance: velocity, value, quality, and organizational effectiveness. As Peter Drucker used to say: if you don't measure, you don't manage. Adaptive governance works with permanent controls and temporary controls. It is the indicators in each performance dimension that signal the need to activate or deactivate controls. In other words, "Fusion Teams" working in the same "tribe" may be under different controls depending on the status of their performance indicators. If your indicators are doing well, great: you have the maturity to work with more freedom.
If your indicators show a worrying trend, sorry: you will be subject to more control and you will, yes, lose delivery speed. With great power comes great responsibility. The styles will be described differently in each organization for an obvious reason: no two organizations are alike. What is common is the set of definitions that need to be made. In other words, what is at stake? Which decisions can be made in each context? What are the boundaries of action for the different teams (central IT and "Fusion Teams")? As you start identifying the different contexts in your organization and mapping them to the different governance styles, keep the following themes in mind:

- Investment decisions: what is the team's spending authority? How much can each team spend? Which bodies should be engaged if the team does not have decision-making autonomy? What approval process must be followed?
- Risk: what types and sizes of risk can the team decide on? What "risk appetite" is allowed?
- Hiring: can teams contract resources and/or services? Which ones? In which situations? What should they do if they lack the autonomy to contract?
- Technology selection: can teams decide which technology to use? In which situations? Can they run POCs? How should they proceed if a need arises that is not covered by the technologies adopted in the organization?
- Architectural alignment: are teams free to create architectural arrangements that are not aligned with the reference architectures defined by IT? What are the limits of that freedom?
- Functional boundaries: what kinds of functionality can the teams create? Where is the line between what must be done by central IT and what can or should be done by distributed teams? How should they proceed if a need arises involving functionality that is supported and maintained by central IT?

Does this make sense to you? Comment, ask questions, participate. Best regards, Mangi.
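The mechanics described above, permanent controls plus temporary controls toggled by indicators in the four performance dimensions, can be sketched in code. This is purely illustrative: the control names and the threshold are hypothetical assumptions, not Gartner definitions.

```python
# Illustrative sketch of adaptive governance: temporary controls are
# activated or deactivated based on per-dimension performance scores.
# Control names and the threshold below are hypothetical examples.

DIMENSIONS = ("velocity", "value", "quality", "organizational_effectiveness")

def governance_controls(scores: dict) -> list:
    """Return the controls in force, given 0-100 scores per performance
    dimension. Permanent controls always apply; a temporary control is
    activated for any dimension trending below the threshold."""
    controls = ["permanent: security baseline", "permanent: compliance checks"]
    for dim in DIMENSIONS:
        if scores.get(dim, 0) < 60:  # hypothetical threshold
            controls.append(f"temporary: extra review on {dim}")
    return controls

# A team with healthy indicators keeps only the permanent controls...
healthy = {d: 85 for d in DIMENSIONS}
print(governance_controls(healthy))

# ...while a team with a worrying quality trend picks up a temporary control.
struggling = dict(healthy, quality=40)
print(governance_controls(struggling))
```

The point of the sketch is the shape of the rule: two "Fusion Teams" in the same tribe can end up under different controls purely as a function of their indicator status.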

  • Getting Hybrid Right (and the Impact of Getting it Wrong)
    by Tori Paulman on March 29, 2023 at 3:56 am

    I've just finished my world tour of my presentation "How to Get Hybrid Right, and the Impact of Getting it Wrong". From Phoenix AZ, London UK, Hollywood FL, and today Amsterdam NL, I'm full to the brim with stories from CIO Leadership Forums.

Today the World Is at the Peak of Inflated Expectations for Hybrid Work

We believe the tools that enabled us to be successfully remote will deliver hybrid success, we have an environment (the office) that should meet our needs, and we have a vision for what success looks like. We've been sitting for so long at the peak of inflated expectations that we're just relieved to be moving again. Remember back to July of 2021? We expected to be back in the office in September. But the Delta variant pushed us back. And then again, in November of 2021 we expected to be back in January. But Omicron quickly reset those expectations.

Tomorrow We Will Be Surfing the Trough of Disillusionment

You know what they say: what goes up must come down. Clients ask us all the time, "What are other organizations doing? In my industry, in my country, in my division...?" We forget that the trough of disillusionment is where best practices are forged, but we can learn from others to speed up our own process. In my session I highlighted two best practices I've seen emerge:

Accountability With Autonomy: Schroders' Employee-Driven Flexible Working Decisions Case Study highlights how an organization can link employee autonomy with accountability by creating a design principle that balances what the employee wants and needs, what the team needs to be successful, and finally what the overall organization needs to deliver for customers or citizens.

Making Mandates Make Sense: How do you make sense of all this and calm the chaos of individual choice? You need to come up with a pattern. The biggest challenge is that the most common pattern we see is a specific number of days per week, or specific days.
It's easy to see why this is unpopular; it also defies logic. Why would Tuesday, Wednesday, and Thursday be better than Monday or Friday for being together? A more intentional approach uses work rituals to define together days. Take an agile team, for example: Every other Monday we come together to make a plan for our work. (Science shows we are more creative when we have tactile experiences, like whiteboards or sticky notes.) Every other Thursday we show off our work. (It's so much more fun to high-five in person rather than over video.) Every other Friday we talk about what didn't work well. (Constructive feedback is a far better experience for everyone face-to-face. No more keyboard warriors.) Then we layer in autonomy around the anchor days.

Getting Hybrid Right Will Require an Application Layer

The applications that made us successful working remotely will not necessarily help us make hybrid successful. Vendors from a diverse set of markets are creating application capabilities that will make hybrid work easier for organizations, leaders, and employees: supporting visit planning and coordination between colleagues, managers, and their teams, and providing user interfaces that work on the go as well as embed in the collaboration apps where workers spend most of their time. Emerging capabilities are being developed to make coming to the office easier, to nudge and remind employees, and to advertise amenities and events that motivate employees to come in. At Gartner our guiding principle is to provide expert guidance and tools to help our clients execute on their mission-critical priorities. So it's no exaggeration to say this feedback I received after my session in Hollywood, Florida made my year: "Just before I came to this conference I sent an email to my leadership team that we were going to institute a mandate, but after attending this session, I'm going back to the drawing board on Monday morning with new data and insights."
Gartner clients can click the image below to see why hybrid does not equal flexibility.

  • Choosing Through Change - Expect the Unexpected!
    by Daniel Betts on March 28, 2023 at 9:45 am

    Choosing through Change is finding the confidence to trust in yourself and the space to see opportunity within adversity, to live in the calm of the storm! - Gill Hicks (Founding Director of M.A.D. Minds) will be delivering a very personal and inspiring guest keynote on Monday 15th May in Sydney as part of Gartner's IT Infrastructure, Operations and Cloud Strategies Conference (IOCS) ANZ. Gill will share her unique toolkit of tips and practical strategies for embracing change and challenges that are both scary and exciting. Rather than build layers of resilience to unwanted change, the audience will hear, through the lived experience of facing the daily unknown, how to appreciate the one thing we can control when everything feels out of control: our thoughts! Make sure you don't miss out, and register now to hear not only from Gill, but to be part of the must-attend Gartner IOCS event in Sydney this May!

  • What’s Beyond Supplier Diversity for Procurement Leaders?
    by Andrea Greenwald on March 28, 2023 at 9:00 am

    Let's say we all agree that we want to make the world more equitable and inclusive. And we, as sourcing and procurement leaders, the people who are on the hook for our organizational spending, feel like we have a role to play in achieving this goal. Currently, the lever we most often pull is creating and managing a supplier diversity program. A diverse supplier is commonly defined as a company that is at least 51% owned and operated by a member of a historically underrepresented or underserved group. Our reasoning for these programs can span from "It's the right thing to do" to "We get business value" and "It's required by law." But when you start to do the math, it's just a small slice of the impact that sourcing and procurement leaders can make. Let's break it down. On average, supplier diversity is 5% of company addressable spend, with a good program (excluding small business) expanding beyond 11% of addressable spend. Ballpark, organizational spend can be estimated with the Pareto principle: 80% of spend sits with the top 20% of suppliers, and the bottom 20% of spend is fragmented across the other 80% of suppliers. For the sake of simplicity, let's say all 5% of diverse spend is strategic (shown in the figure below). Conservative estimate: that leaves 75% of our spend, sitting with the 19% of our supply base that is paid the most, with no diversity requirements. The people who know supplier diversity will say, "Andrea, that's why we have tier 2 spending!" I hear you. But if we think about the influence of one sourcing and procurement leader's spend, we are just doing the same calculation with even smaller percentages. How can companies make a bigger impact? It comes down to influencing the workforce diversity of your largest-spend suppliers.
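The arithmetic above can be sketched in a few lines. The numbers are the illustrative figures from the post (80/20 Pareto split, 5% diverse spend assumed to be entirely strategic), not data:

```python
# Rough sketch of the spend arithmetic, using the post's illustrative numbers.
total_spend = 100.0       # organizational addressable spend, as a percentage
strategic_share = 0.80    # Pareto: top 20% of suppliers carry 80% of spend
diverse_spend = 5.0       # average diverse-supplier spend (% of addressable spend)

# Simplifying assumption from the post: all diverse spend is strategic.
strategic_spend = total_spend * strategic_share
uncovered_strategic = strategic_spend - diverse_spend

print(f"Spend with top suppliers, no diversity requirements: {uncovered_strategic:.0f}%")
# -> 75% of spend, held by roughly 19% of the supply base
```

Changing the assumptions (for example, an 11% diverse-spend program) shows how slowly the uncovered share shrinks, which is the post's point about influencing large suppliers directly.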
Helping High-Spend Suppliers to Be More Diverse

If sourcing and procurement leaders influence their high-spend suppliers to hire and promote more diverse talent within their companies, then, as just one function, we are creating an ecosystem of equity and inclusion and creating opportunities for people who normally wouldn't have had them. But where do you start? First, evaluate the DEI maturity of your own organization. Gather data on your workforce diversity representation and the practices in place to create opportunities for underrepresented talent, to drive community engagement and to encourage stakeholder commitment (see the figure below). Why? Knowing our own practices allows us to understand what we can ask of our suppliers. Second, assess the DEI maturity of your high-spend suppliers. For incoming suppliers, request details on the potential supplier's posture toward DEI, leveraging a questionnaire similar to the one used in your self-assessment (see figure above). For current suppliers, it will take a strong relationship and trust for them to divulge this information, but there are tools of varying capability that can help automate the process, such as EcoVadis, SupplyHive, Privva and SupplyShift. Be transparent with suppliers about how you plan to use the information. Why? You can't set goals or track progress unless you know where you are starting from. Third, analyze the collected information and create a strategy for where to concentrate your time and on what areas. For focus, prioritize the suppliers who are immature and willing to grow. Where can your company play a role in their ambitions? For areas, brainstorm the unique capabilities your organization can offer your supply base. It's also OK to go on a journey together. To get started, sourcing and procurement leaders may update their supplier code of conduct to include information on DEI, as Johnson & Johnson, BlackRock and Boeing have done.
From there, they can identify the issues they want to focus on (such as inclusivity in recruiting, learning and development, and benefits). Be sure to track progress to ensure the longevity of the program. Why? Because this is the role that procurement can play in advancing DEI. Thank you, sourcing and procurement leaders, for staying on this DEI journey with me. Andrea Greenwald, Sr Director, Advisory, Gartner Supply Chain, Andrea.Greenwald@gartner.com

Register for the upcoming webinar: Build a Supply Chain That Thrives in a Low-carbon Economy

Listen and subscribe to the Gartner Supply Chain Podcast on Gartner.com, Apple Podcasts, Spotify and Google Podcasts

  • The Technology Challenge
    by Hank Barnes on March 28, 2023 at 1:15 am

    As we deepened our coverage of business buyers of technology recently, we encountered yet more evidence of technology-related challenges facing these buyers and, by extension, the companies trying to sell to them. (Side note: for clients, check out our resource center for a quick way to explore all of the research in this area or to filter it by functional area of interest: Sales, Marketing, Customer Service, Finance, HR, or Supply Chain.) We asked over 3,000 people working in the departments above what their top two departmental business objectives were. We then asked about the biggest challenge or obstacle they faced in achieving the top objective. The top four issues are quite revealing: the number one and number four choices are both technology related. For nearly every department, technology is so intertwined with the way we work that many can't see how to get work done without it. But even as we rely on it more, our confidence in it, or better stated, our confidence in understanding it, remains low. (As a note, for clients, we dive into the full list, including breaking it down by each of the departments, in this research note.) The number two challenge was the one we expected the most: we consistently see talent issues being top of mind across the organization. But let's add number three to the pile: disagreements within the team. Surprising? No. Alarming? Yes. All of this just reinforces our research on regret and high-quality deals (or the lack thereof). The biggest driver of regret is disagreement within the team on objectives. The majority of buyers feel their expectations have not been met for big technology purchases (more on that next week for these departmental buyers). Add a backdrop of lack of confidence in their ability to choose the right tech or implement it correctly, and we can see how decisions may drag on and on, or how growth after a win may not go smoothly. Those looking closely might say the percentages are low.
And they are: there were 12 options to choose from, and only 2% of respondents said they saw no major obstacles. But even with that spread, three of the top four reinforce all of our research, and viewed in that light, we can see how these issues are on the minds of many customers, even the ones that did not choose them as their number one challenge. For vendors, the implications are clear. Make it your aim to help your customers build confidence and consensus. Confidence in the approach to selection, so they are confident they are choosing the right technology (let's face it, all tech works at some level; if you help them be more confident about themselves, that goodwill will rub off on you and you will win your fair, and likely bigger, share). Help them build confidence in their ability to achieve value. Help them build confidence in gaining high adoption rates. And help them build consensus (see my recent post). Tech is part of everything we do. It's time to work together to reduce the fear and uncertainty around it.

  • Synthetic fuels in EU: why they're great news for Elon
    by Pedro Pacheco on March 27, 2023 at 11:51 am

    The impending weaving of synthetic fuels into the EU's ICE ban policy is a sigh of relief for part of the European automotive establishment. The thought of having to fully transition to electrification sent shivers through many in this sector, but synthetic fuels can now prolong ICEs indefinitely. The old school sees this as a victory for ICE and a sign that BEVs won't take over the market, since they require a considerable change of habits from consumers. All that seems well and good, but only for those who find comfort in that vision. In reality, things will be a lot more complex than that. Synthetic fuels are made from green electricity, exactly the same stuff that powers BEVs. The big difference is that the well-to-wheel efficiency of ICEs on synthetic fuels is 82% lower than that of BEVs. That means ICEs will need several times more green electricity than BEVs to cover the same distance (an 82% efficiency deficit implies roughly five and a half times the energy). And that, as you'd imagine, will make BEVs a lot cheaper to run. Curiously, hydrogen has exactly the same problem, being massively electricity-thirsty, though not as bad as synthetic fuels. Some would say, "But an ICE is much more convenient to use, so people would pay the difference." Now imagine the €2/litre you pay today for diesel or petrol more than tripling. It might make many drivers think twice about their ICE. Economics can make or break powertrain adoption; Henry Ford could tell you a lot about that. EVs were actually doing great at the beginning of the last century: they were a lot easier to drive than a petrol car (very different from the ICEs you know today) and were not as dirty, smelly and noisy. Hence, more convenient and easier to accept. However, Henry Ford put an ICE on the market at a fraction of the price, and down went EVs. By 2035 we will see the opposite movie: synthetic fuels will be great for the super rich, but not for the average worker.
    Also, adding synthetic fuels to the equation makes everything more complex: it would be extremely naïve to think BEVs, hydrogen and synfuels can all conquer a major share of the market. You just need to look at history (the example of the Ford Model T, or the growth of diesel in Europe): the choice of fuel is mostly dictated by regulation and economic factors. EU regulation currently puts the three options at the same level, so it's economics that will decide who wins or loses. And I won't even mention how many cities will be keen to close urban centers to ICEs; synfuels make no difference there. And it gets even worse. Synthetic fuels will bring a false sense of security to those automakers and suppliers who don't really like BEVs. Consequently, they'll never put major emphasis on BEV development, as carrying on with ICEs is just easier. This is a trap and a deadly mistake. BEVs will command the market in China, and in the US Tesla will drive BEV adoption beyond CAFE regulation demands. Already today two US states, California and Maryland, have committed to do what the EU has refused: to abolish ICEs by 2035. And likely more will follow. Hence, European automakers not heavily invested in BEVs will have difficulty fighting in these markets, even more than they already have today. On top of that, they will still have to deal with strong EV adoption in Europe anyhow. So, Elon Musk and other auto leaders with a vision of the future actually have reasons to be happy: they can keep focusing on the future, confident that more legacy competitors will still be living in the past.
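The efficiency figure above is worth sanity-checking with a quick calculation: energy demand scales with the inverse of well-to-wheel efficiency, so an efficiency that is 82% lower does not mean 82% more electricity, it means several times more. A minimal sketch, using the post's 82% figure as the only input:

```python
# Back-of-envelope energy comparison using the post's 82% figure.
bev_efficiency = 1.0                              # normalize BEV well-to-wheel efficiency
icev_efficiency = bev_efficiency * (1 - 0.82)     # "82% lower" -> 18% of the BEV's

# Green electricity needed per unit of distance is inversely
# proportional to well-to-wheel efficiency.
extra_energy_factor = bev_efficiency / icev_efficiency

print(f"ICE on synthetic fuel needs ~{extra_energy_factor:.1f}x the electricity of a BEV")
# ~5.6x, i.e. roughly 460% more, not 82% more
```

This is why the running-cost gap in the post is so stark: the multiplier compounds directly into the fuel price per kilometre.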

  • The FTC's Proposed “Click to Cancel” Could Help or Hurt Your Brand. Which Will It Be?
    by Augie Ray on March 27, 2023 at 10:18 am

    My peer, Ben Bloom, raised a provocative question: Will the proposed Federal Trade Commission "click to cancel" rule help or hurt brands? Like virtually everything in business, I believe the answer depends on a brand's customer-centric culture, the customer understanding it has or collects, and its commitment to #CustomerExperience. It's clear how this proposal could hurt some brands, but smart brands can beat (and should already be beating) the FTC to the punch with a winning #CX. If you're not familiar with the proposed "click to cancel" rule, it is a simple idea that all of us will applaud as consumers: It would require businesses to make it at least as easy to cancel a subscription as it was to start it. For example, if you can sign up online, you must be able to cancel on the same website in the same number of steps. The fact "click to cancel" must be forced upon companies is a sign of how difficult it is to achieve customer-centric decisions in a company, particularly at the moment of customer abandonment. For new or loyal customers, the advantages of experiences that encourage satisfaction and loyalty are evident, but what's the value of making it easy for customers to depart? The business benefit of keeping customers is immediately evident (revenue!) while the dangers of poor CX at the moment of churn are less so (damaged reputation and a reduced chance to recapture the customer). You can almost hear brand leaders thinking, "Well, if cancelers are pissed off to begin with, then what's the danger of more frustration?" But, let's face it, this should already be a no-brainer for brands. You and I both know this policy is a painfully obvious idea in the customer side of our brain, yet business leaders will likely fight this proposed rule tooth and nail. Any resources and effort your organization may be inclined to dedicate to lobbying against the rule really ought to go into making the rule superfluous for your brand.
Brands can win with a thoughtful, customer-centric approach. Today, it's too easy to tell customers they have to call your company to cancel, placing the retention burden on call center employees. That not only frustrates people and risks brand damage, it is also a terrible, frustrating employee experience. But if customers must be able to cancel as easily as they purchased or subscribed, smart brands will: Listen more and resolve drivers of dissatisfaction and churn. If some brands face an impending cancelation Armageddon, the first and most obvious course should be to minimize and eliminate the reasons customers want to leave. Many companies do a lousy job of listening to customer feedback and investing to resolve causes of customer friction. Often, this is because the economic value of doing so isn't evident, but this new FTC policy tips the cost/benefit equation and makes the financial benefits more evident. Identify and react to customer warning signals. The prior suggestion was about understanding and fixing the top aggregate reasons for churn; this idea is more about identifying and proactively reacting to the customers most at risk of churning. Some customers will abandon brands unexpectedly, but for most, there will be signals: reduced usage or frequency, declining engagement, more customer care interactions, and dissatisfied survey responses. Brands should be monitoring and proactively responding to these abandonment signals rather than waiting for the cancelation request. An offer of a free month to someone threatening to end a subscription can seem a desperate, too-little-too-late brand ploy, while a surprise offer of a free month to an existing (albeit declining) customer can be a loyalty-building delight. Consider customer personas to improve the overall experience and present the most effective retention offers. Brands tend to rely on the negotiating skills of employees to prevent abandonment, turning each instance into a one-on-one negotiation.
"Click to cancel" will force brands to automate this process, which means they need to understand each customer's persona. A budget-minded customer struggling to make ends meet will be responsive to an offer of reduced subscription fees, while a more value-oriented customer may be encouraged to stay by adding services that would otherwise require additional payments. Or, think of Netflix: knowing which fan is into sci-fi versus romantic comedies can help the brand promote the most desirable upcoming content to try to keep customers. Implement a constant test-and-learn approach to finding the best retention offers. The solution isn't to find the one-and-only best offer; instead, constantly test what effectively retains customers. This should be a ceaseless process because today's answers may not be the same as next year's, depending on changes in the economy, the competitive landscape, and your brand's offerings. Finally, earn the right to keep in touch. If the customer is committed to abandoning, then do what you can to improve your future ability to recapture the lost customer. Don't assume you can continue to spam lost customers; instead, ask for permission to keep people on your email list and offer options. For example, a lost customer probably won't want your daily marketing message but might be interested in a once-a-month update on improvements you've made to your product or service. "Click to cancel" could help or hurt a brand. Do the right things to prevent churn, earn loyalty, respond to cancelation requests, and ease the path to continued engagement and recapture, and your brand can snatch victory from the jaws of defeat and minimize abandonment.

  • Autonetics - The Approach to Intelligent Automation Design (Part 2: Learning from Tragedy)
    by Cameron Haight on March 27, 2023 at 4:51 am

    "Automation surprises occur when operators of sophisticated automation, such as pilots of aircraft, hold a mental model of the behavior of the automation that does not reflect the actual behavior of the automation. This leads to increased workload, and reduced efficiency and safety." - Formal Method for Identifying Two Types of Automation Surprises (Sherry et al.) In Part 1 of this series, I took a bit of a history tour to discuss the Cybernetics contribution to automation and computer science in general. In Part 2, I look at how we can and should learn from automation going "sour" in real life. On May 31, 2009, at approximately 7:30 PM local time, Air France Flight 447 took off from Rio de Janeiro to fly to Paris. Several hours later it flew into an area of severe storms. Some four minutes after entering the turbulent environment, the autopilot (and auto-thrust) disengaged, which gave full control of the airplane to the pilots (called flying in alternate law). In the next four minutes over 70 cockpit alarms would issue before the aircraft literally stalled into the ocean, with the loss of 216 passengers and 12 crew members. The subsequent investigations found that the Pitot tubes had frozen internally due to a buildup of ice crystals (a known possibility, but one which was not communicated to the crew), causing the avionic speed indicators to become erroneous, which led the computer-controlled system to hand off control to the pilots. From the analysis of the recovered flight data recorder, the information presented to the pilots as the transfer of control occurred is believed to be that shown below: Figure 1. Electronic Centralized Aircraft Monitor (ECAM) messages. And this information was presented within the context of the cockpit as indicated below (see Figure 2). Figure 2. Airbus A-330 203 cockpit and ECAM message display positioning.
    Keep in mind, as you view this from the perspective of the pilots, that the aircraft was in a pitch-black environment, likely punctuated by lightning, with turbulence exacerbated by the pilots' inputs causing the aircraft to roll. In addition, the Flight Director displays (the consoles with the crossbars on either side of the cockpit that provide feedback on the airplane's path) would flicker on and off, and this influenced both pilots' actions that would ultimately cause the airplane to stall. Now, let's run through a "checklist" of potential issues the pilots had to deal with, from their perspective, as they attempted to control the airplane:

- System information easily understood? No (lacking context and not always available).
- System warnings of a potential condition occurring that would necessitate transfer of control? No.
- System explanation for automation failure and hand-off? No.
- Pilot mental state likely ready to accept transfer of control? No (likely because of the lack of warning).
- System alarming designed to reduce cognitive load? No (more than 70 stall and other alarms were issued, and these continued until near the very end of the flight).
- System explanation for alarms shutting off? No (the stall alarms stopped when the computers thought they were in error, as it was not thought that an airplane's velocity could be so slow).

In the final report, one of the recommendations was that the "display logic" needed to be reviewed, especially in the context of a stall (note: while there were audio alarm warnings, there were no visual indicators of stall). That's an incredible understatement, but as we can see from the above, that was not the only problem. In essence, the pilots didn't know what data to believe, nor did they clearly understand the situation that they were in.
The report also suggests that the failure to effectively respond to the numerous aural warnings could have been because of the heavy cognitive workloads being experienced by the pilots (note: the report actually references some classic automation and human factors papers that we’ll touch on again in a future blog post). In addition, the “Crew Resource Management,” or communication between the pilots, was ineffective, as neither knew that their actions were cancelling each other out (one was trying to pitch the airplane down to gain airspeed while the other was pulling up to gain lift, and the result was a cancellation of each input). This lack of awareness of the other’s actions arose because the joysticks in this type of aircraft at the time were not linked, so there was no physical feedback of the cancellation scenario playing out. What seems clear is that, beyond the crew communication dysfunction, the aircraft itself failed to provide the information necessary for the pilots to understand the problem at hand and effect proper remediation responses.  In Part 3, we’ll review how the introduction of increasingly intelligent automation will cause even more challenges to occur.

  • "The more you read, the more things you will know. The more that you learn..."
    by Doug Bushée on March 27, 2023 at 4:15 am

    "....the more places you'll go." Many of you will recognize that quote from Dr. Seuss - and those who don't missed out on some excellent childhood reading. However, I also think you'll agree that the passage, while directed at a young audience, applies to anyone at any age. In fact, questions about "everboarding," "talent upskilling," or "skills needed by my commercial teams to be successful in the future" rank in the top three of my call topics with sales and sales enablement leaders. During these calls, we often talk about something I see more and more of: a refocus on everboarding. Like many organizations' structured plans for onboarding, everboarding is a structured plan that helps existing sales talent acquire new knowledge and skills.
Identifying Everboarding Focus
But getting everboarding right isn't easy. Sales and sales enablement leaders have to ask questions such as "What skills do my (insert commercial role here) need to be successful today and in the future?" and "Where are the biggest skill gaps I have on today's more digital and data-driven selling journey?" It also requires sales and sales enablement leaders to make everboarding effective. Should additional training opportunities be voluntary? Do I have people submit an application to be considered for an everboarding program? Will we mandate the learning? Answering these questions takes time and often a lot of cross-functional collaboration.
Delivering Effective Everboarding
Finally, no matter what the everboarding process looks like, it must include a blended delivery approach.  Effective everboarding requires asynchronous learning (possibly using existing asynchronous content from a higher-ed institution or online learning platform). It also requires synchronous learning, including discussions with internal SMEs and sales enablement team members. 
Finally, it requires opportunities to apply the learning, either in their current role or in a co-op-type arrangement in a position that heavily leverages the new knowledge and skills learned.
Getting Value from Everboarding
Building effective everboarding programs takes time, effort, and a commitment to the success of your current and future sales talent – but it's worth it. A recent Gartner study showed that lack of professional development was one of the top categories demotivating sellers. Demotivated sellers are more likely to be actively looking for work and less likely to hit their numbers. And in an environment where it's taking 60+ days to find talent, and hitting growth numbers is much more challenging when you have open territories, isn't it better to have a deliberate strategy to upskill your current talent? Effective everboarding programs help retain your sales talent, build new skills within your organization and attract new talent. And, of course, the more your sellers know, the more places your sales organization will go.

  • “I’ve Heard of Information Overload, But What Is Enterprise Attention Management?”
    by Craig Roth on March 27, 2023 at 1:43 am

    If you’re like me, Monday morning is your time to catch up on all the emails and messages that came in over the weekend (or were left over from last week). After a bit of catch-up, you decide what is really worth attending to this week. In fact, Monday should be declared the “Attention Management Day of the Week.” So what better day to introduce the concept of “enterprise attention management” to those who haven’t heard of it or want a refresher? "Enterprise attention management" refers to the application of attention management principles and practices in the context of an organization or business. It recognizes that an understanding of what is deserving of workers’ attention (and, just as importantly, what is not) requires an enterprisewide approach. More formally, I define it as follows: Enterprise Attention Management is a discipline that helps an organization’s workers respond faster and make better decisions by promoting or demoting information based on importance. Enterprise attention management takes a systemic and holistic approach, considering the organizational culture, work environment, communication channels, technology, and individual needs and preferences. It does not put all the change management burden on workers, but rather involves collaboration among IT and business leaders to identify and address attention-related challenges and opportunities to help all workers in an organization guide their attention to important information. That is why we sometimes call the mechanisms to do this “guided attention.” Technology providers - including hardware vendors, software vendors, and service providers - play an important role in devising products that enable a richer set of attentional capabilities. With a coordinated response, enterprise attention management can create a work environment that supports and enables employees to use their attention and cognitive resources in the most optimal and fulfilling way possible. 
How Does Enterprise Attention Management Differ From Information Overload?
Enterprise attention management is a fundamentally different response to information abundance than “information overload.” Information overload is a narrative that refers to the feeling of being overwhelmed or stressed by the volume of information that we are exposed to, such as emails, messages, notifications, dashboards, and news. Since “overload” is a personal feeling, information overload is usually seen as a personal issue. Since “overload” means “too much,” its solutions tend to focus on individual techniques to “turn down” and restrict the flow of information, such as personal discipline, time management strategies, information dieting and digital detoxes. In contrast, enterprise attention management takes a neutral and more systemic approach to the issue of attention in the workplace. It recognizes that information abundance has good and bad aspects. By systemic I mean an approach that is embedded in work processes and applied consistently, rather than an ad hoc approach that may differ for each person and each time a situation arises. So a “management” mindset of maximizing the good and minimizing the bad is more constructive than an “overload” mindset. Enterprise attention management recognizes that long-term, enterprisewide improvements cannot rely only on individual initiative. Rather, a poor attentional environment is a collective and organizational issue that requires coordinated efforts and interventions from different stakeholders, including employees, managers, IT, and technology providers. I recently experienced an example of this approach firsthand. Our IT department decided to respond to complaints of email overload by examining the types of emails being sent and received. 
Rather than use overload solutions, such as advice to workers on setting aside time each day to handle these emails or on how to set email filters and use reusable text components, IT took ownership of the problem and provided a solution. It found a systemic component: a large chunk of emails were about scheduling a certain kind of call. IT enlisted the business unit in charge of the process as a stakeholder. Then they worked together to supply a systemic response, in the form of a custom system that keeps a significant proportion of important communications from hitting email inboxes and handles this process more effectively than could be done in email. The resulting system is not perfect - no system ever is. But it is a good example of where an enterprise attention management approach generated a holistic and systemic outcome that would not have arisen from an overload mindset. Information overload cannot be “cured.” But introducing an enterprise attention management approach can demonstrate how looking differently at seemingly intractable and inevitable problems can produce real and long-lasting results.
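The email example above hints at what a "guided attention" mechanism can look like in practice. Below is a minimal Python sketch of the idea: promote, demote, or divert items based on organization-level rules rather than individual filters. The message fields, keywords, and routing labels are all hypothetical, not any particular product's API.

```python
# A minimal sketch of "guided attention": promote or demote incoming items
# based on importance rules set at the organizational level, not by each
# individual. The rule set and message fields are made up for illustration.

SCHEDULING_KEYWORDS = {"schedule", "reschedule", "availability"}

def triage(message):
    """Return a routing decision for one inbound message."""
    subject = message["subject"].lower()
    # Systemic rule: scheduling traffic is diverted to a dedicated
    # workflow instead of landing in every inbox (as in the IT example).
    if any(word in subject for word in SCHEDULING_KEYWORDS):
        return "route_to_scheduling_system"
    # Promote items the organization has flagged as important.
    if message.get("from_priority_sender"):
        return "promote"
    # Everything else is demoted to a digest for later review.
    return "demote_to_digest"

inbox = [
    {"subject": "Please confirm availability for the call", "from_priority_sender": False},
    {"subject": "Q3 incident postmortem", "from_priority_sender": True},
    {"subject": "Newsletter", "from_priority_sender": False},
]
decisions = [triage(m) for m in inbox]
```

The point of the sketch is that the rules live in one shared place, embedded in the workflow, rather than in each worker's personal filters.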

  • Supply Chain Top 25 Field Report: Digital and Talent Trends
    by Stan Aronow on March 24, 2023 at 9:31 am

    Over the past two months, I’ve had the privilege of joining briefing calls with nearly two dozen supply chain organizations as part of the Supply Chain Top 25 analyst education process. Having run this research program for several years, I’ve found no better way to quickly immerse oneself in the latest and greatest challenges, innovations and trends of the most advanced supply chains in the world. What are the common threads running through these companies’ updates? There are too many to cover here, so I will focus on the ever-popular intersection between technology and people.  At the highest level, the solutions shared were intelligent, connected and integrated across functions and partners. The people, coming out of the most disruptive period of our careers, were upskilled, empowered and frankly exhausted.
Differentiated Investments
One place to start in trend spotting is to follow the money and compare the investment strategies of those making the global Supply Chain Top 25 and Masters list with those ranked lower on the broader list of companies evaluated. The Top 25 program team is still compiling summary statistics for this year, but a 2022 comparison shows some key differences in technology investments. For more established capabilities, the top group invested significantly more often in sourcing and procurement, manufacturing execution and global trade management solutions. This is not surprising given the level of supply disruptions and constraints everyone faced during the pandemic. What’s unique is that the leaders had greater wherewithal to continue upgrading their product supply and risk management capabilities even amid the worst of the firefighting. A comparison of the relative investment frequency for emerging technologies shows double-digit percentage differences between Supply Chain Top 25 companies and others in the areas of immersive technologies (AR, VR), blockchain, conversational systems and artificial intelligence (AI). 
Most advanced supply chains have built the proper process, solution and data foundations required to start experimenting with, and harvesting value from, these new technologies. Connected Solutions and People The term “connected” spanned many of this year’s Supply Chain Top 25 company briefings and it was used in several contexts: connected customers, connected suppliers and partners and connected employees. Connected customers. In B2C, the most advanced consumer products companies have developed “ground truth” in terms of on-shelf availability with retailers through inbound traceability, machine vision, sell-through data and predictive algorithms. In B2B, there is similar tracking of inventory replenishment and consumption with customers. These capabilities allow for more synchronized daily operations and planning. On a more strategic level, all these companies had teams embedded with customers for joint innovation and problem solving. Connected suppliers and partners. Upstream, many of these same supply chains had similar connectivity and integrated planning with their strategic suppliers. In some cases, the forecasting ability of the brand owner was greater than their suppliers and they used collaborative planning tools to recommend supply commits based on projected availability calculations. Many of these leaders also enhanced their ability to proactively identify, mitigate and manage supplier risks. A leading life sciences company created an AI tool that models its end-to-end supply chain. It predicts previously unforeseen supply issues as an early warning and proposes best-possible solutions based on analysis of successful resolution of similar problems in the past. Likewise, a CP leader wove together functional control towers across its entire supply chain to create an ultimate control tower with predictive management and issue resolution. 
Several companies shared how they are using technology to promote greater sustainability and ethical sourcing through improved visibility and control. Connected employees. The leaders briefing us described various technology solutions that allow employees to maintain awareness of their environments through alerts and performance management, to boost productivity through physical and logical automation and drive innovation and collaboration with others through connected platforms. Many of them consider the machines working alongside and in support of human workers to be an extension of their workforce. Significant investments have been made in automating and instrumenting sensors on factory and warehouse floors over the past few years. Several highlighted the use of automation, digitalization and analytics to drive higher efficiency and product quality in these environments. Many supply chain leaders spoke more broadly about their employee engagement and talent development strategies. This was often in the context of increasing employees’ flexibility on where and when they work. Leaders continued to invest in skill development across the frontline, middle and executive levels of their organizations.  One company trained its entire supply chain executive team on digital technologies through a dedicated university program. It’s important to note that, while these company-analyst briefings typically highlight the best facets of supply chains’ work, all is not perfect in even the most leading companies. Many leaders acknowledged employee fatigue and burnout. In more private discussions, some have noted uneven adoption of new tools, particularly new AI-based planning tools. There is an opportunity for us to digest and integrate the accelerated changes wrought by the last few years. 
For those interested in learning more lessons from the leaders on the Supply Chain Top 25 for 2023, be sure to attend the dedicated sessions held at Gartner’s Supply Chain Symposia in Orlando in May and Barcelona in June. The ever-popular reveal of this year’s list will be done via online webinar on May 24, 2023.   Stan Aronow VP Distinguished Advisor Gartner Supply Chain Stan.Aronow@gartner.com   Register for upcoming webinar: Build a Supply Chain That Thrives in a Low-carbon Economy   Listen and subscribe to the Gartner Supply Chain Podcast on Gartner.com, Apple Podcasts, Spotify and Google Podcasts

  • ChatGPT plugins will accelerate the advancement of Machine Customers
    by Mark Raskino on March 24, 2023 at 4:50 am

    The pace of generative AI advancement is just dizzying, isn't it? This week came news that OpenAI is now offering ChatGPT plugins. These "enable ChatGPT to interact with APIs defined by developers, enhancing ChatGPT's capabilities and allowing it to perform a wide range of actions," says the announcement. OpenAI suggests one of the main benefits of the plugins is to "Perform actions on behalf of the user; e.g., booking a flight, ordering food, etc."   These are activities that human customers currently perform that can now be delegated to software acting on their behalf. For example, one of the early adopters is OpenTable. They suggest that with the ChatGPT plugin a person could ask: "My girlfriend loves Japanese food, what’s something similar that has reservations for this weekend in downtown LA?"  In this situation the human customer is turning to the software as if it were a hotel front desk concierge. The searching and shopping action is being delegated from human to machine. Another example is Expedia. They show here how ChatGPT enables a conversation with the bot software that is almost the same as if you were turning to a professional human travel advisor. From the vendor's point of view (airline, hotel, tour operator), the software is taking more of the search and selection action away from the human. The question then becomes: how does the vendor's marketing and sales effort influence the software? Some of this is already happening faster than Don and I anticipated in our new book, When Machines Become Customers. In the book we explore the implications of this important megatrend, how you should reframe your thinking around it and what actions to take to ensure your company gets ahead of it.
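For the mechanics: a plugin is described to ChatGPT by a small manifest that points at an OpenAPI description of the vendor's API. The sketch below builds that manifest as a Python dict for illustration; the top-level field names follow OpenAI's published plugin format as announced, but every value here (names, URLs, descriptions) is a made-up placeholder, not a real endpoint.

```python
# Illustrative shape of an ai-plugin.json manifest, built as a Python dict.
# Field names follow OpenAI's plugin announcement; all values are
# hypothetical placeholders, not real services or URLs.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Bookings",
    "name_for_model": "example_bookings",
    "description_for_human": "Search and book restaurant tables.",
    # The model reads this description to decide when to call the API.
    "description_for_model": "Use this to find restaurants and reserve tables on the user's behalf.",
    "auth": {"type": "none"},
    # ChatGPT fetches this OpenAPI spec to learn the available operations.
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
}

serialized = json.dumps(manifest, indent=2)
```

Note that the vendor never scripts the conversation; it only describes its API, and the model decides when and how to call it, which is exactly why the marketing-influence question above gets interesting.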

  • ChatGPT allows us to automate faster, but better?
    by Chris Saunderson on March 23, 2023 at 5:32 am

    I am looking at what has been shared about the capabilities of Large Language Models (LLMs).  I'm just as impressed by what they're capable of delivering as everyone else. But. What this brings to the fore for me is that the opportunities LLMs offer are very much a double-edged sword. While the capability to very rapidly build apps / code / automation is present, there's an underlying assumption (and risk!) that what is produced by an LLM is "good".  How do we know if it's "good"?  Good doesn't equate to functionality alone. "Good" has many facets. Being functional is one of them, but so is security, fragility, explainability, performance, maintainability... you get the idea.  So what does this mean for the automation world? (I have to work that angle in here, because it's where my brain goes.) What it means is that the energy once spent on developing automation can now be redirected. Where?  To the activities that make automation maintainable, performant, explainable and secure. The thing is, LLMs don't (directly) help with any of these problems, but they do free up the time and attention we need to apply to solving them.  They also offer a way to ask the LLM questions that help with each of these disciplines. What are we going to spend our time on? In my opinion, explainability - the ability to explain not just what the code is doing, but how it is doing it - becomes the most critical skill to build and refine.   Being able to explain what is happening means that you are able to: develop tests that ensure the generated code does what it's meant to, and make assessments of the security or the behavior of the code and the surrounding ecosystem. In short: if you are looking at LLMs as the way to do less work, you might be in for a rude shock. Yes, you'll be able to outsource the generation of code, but the sustainment of the generated content is going to be where you spend your brain power. 
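One concrete way to act on that: wrap whatever the model produces in tests you wrote yourself, encoding what the code is meant to do. In this hypothetical Python sketch, `parse_discount` stands in for LLM-generated code; the assertions beneath it are the human-authored contract.

```python
# Hypothetical LLM-generated function: everything in the function body is
# what a model might return; the assertions beneath it are ours.
def parse_discount(text: str) -> float:
    """Extract a percentage like '15%' from free text and return 0.15."""
    for token in text.split():
        if token.endswith("%"):
            return float(token.rstrip("%")) / 100.0
    raise ValueError("no percentage found")

# Human-authored tests that pin down the intended behavior, including the
# failure case the generated code might otherwise get silently wrong.
assert parse_discount("save 15% today") == 0.15
assert parse_discount("up to 7% off") == 0.07
try:
    parse_discount("no deal here")
    raised = False
except ValueError:
    raised = True
```

The tests, not the generated code, become the durable artifact: regenerate the function as often as you like, and the contract still tells you whether it is "good".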

  • Are You a Marketing Leader Managing Change? Read On!
    by Emmett Fitzpatrick on March 22, 2023 at 7:22 am

    Employees are reporting record levels of change fatigue – on average, they reported experiencing 39 work-related changes annually – meaning executive leaders of all functions need to help them navigate this environment. This is a particularly acute challenge for marketing leaders, who are often tasked not only with engaging their own teams in change, but also with helping internal business partners and even external stakeholders respond to disruption. Last year, as part of our Marketing Symposium, I led a breakout session on helping leaders engage their teams in this changing work environment. Below are three key takeaways that I shared; I invite you to consider how you can incorporate them into your day-to-day responsibilities of managing your teams. And by the way, this year’s Symposium is being held in Denver in May, so reserve your spot today! Before I get to the takeaways, I urge you to remember one piece of advice when it comes to managing your teams during change: Stop telling your employees to change, and start figuring out what’s getting in their way. With this hint in mind, let’s get to our recommended actions for you.
Acknowledge that your employees are likely fatigued
This one sounds easy, right? Well, yes, it is! We need to acknowledge – for ourselves – that our employees are likely fatigued, and not be shy about sharing that with them. The problem is that oftentimes, leaders across any organization are so focused – rightfully so in many ways – on the changes that they “own,” the things that they work on day in and day out, that they forget that these changes are not the only ones that employees are experiencing at a given time. In short, they have tunnel vision. Marketing leaders need to recognize that they own one piece of the puzzle – an important one to be sure – but in the minds of their employees, their changes might get drowned out, so to speak, by other things that come across their inbox. 
By probing to understand the depth and breadth of changes that employees might be experiencing before asking them to engage with another major disruption, marketing leaders can consider delaying some announcements, or aligning them with other initiatives to help employees “connect the dots” for implementing the broader suite of changes. Short of that, even when delaying changes might not be realistic, the simple act of acknowledging that employees are dealing with a lot should be the first step of helping employees navigate change.
Involve your employees in change (when possible)
This in many ways is the hallmark of what people usually think of when talking about leading change. The idea is that when we’re rolling out a change, if we ask employees for their opinion or feedback, then we’ll get them committed to rolling out the change, and then watch it flourish. In reality, that’s not always possible, and even when it is possible, it shouldn’t be the only part of your change strategy. Rather, you should think about balancing how much you ask employees for feedback with the need to actually get things done. Hopefully by now you know that leaders should not be caught dictating major changes without any employee input, in the classic “command and control” communications cascade. On the other side of this continuum, however, it’s just not practical to incorporate every person’s opinion about a new strategy into your decision making. This can slow down the process, and it can also lead to more disengagement if you take the time to involve employees in decisions and inevitably disappoint some because they feel like you didn’t take their advice. So what is the sweet spot for involving employees in change? We call it a co-created strategy, where employees are engaged, but at the right altitude, so to speak, and at the right time. 
We also want to make sure that the feedback sought from employees is focused not so much on how to formulate the change itself, but more on what the change means for them and how they can internalize it for their responsibilities. So, yes, involve employees – and you can be the best judge of when and how much to involve them – but be intentional about what it is you’re asking of them.
Identify and address employee barriers to change
Regardless of how much control you have had over the changes your employees are experiencing, marketing leaders should incorporate this step into their regular engagement of their teams. You and your fellow leaders should be on the lookout for what is driving your employees to behave in a certain way, and critically, what might be getting in the way of the behaviors that you are trying to encourage. To that end, I encourage you to review our Behavioral Listening Guide, which provides a framework for you to diagnose those barriers. This can be done in a variety of ways, from formal employee pulse surveys or focus groups to informal listening strategies, but the act of probing to identify obstacles, whether real or perceived, that your employees are experiencing is an essential component of helping you engage your team in change. Once you have identified those barriers, you can develop “interventions” to try to address them. If, for example, you noticed that employees reported a disconnect between what they hear as a priority from leaders and the actual behavior of their colleagues “on the ground,” you can highlight stories from team members who are displaying the desired behaviors to provide social cues that encourage positive engagement.
Your Mission
My colleagues and I welcome the chance to connect with you to help you apply these principles in your day-to-day leadership responsibilities, and we have a host of case studies, frameworks, and tools to help you do just that. 
In the meantime, feel free to copy and paste that one piece of advice, print and keep a copy of it on your desk, or do whatever you need to do to remember: Stop telling your employees to change, and start figuring out what’s getting in the way.  

  • Gartner Launches Metaverse Emerging Tech Impact Radar
    by Tuong Nguyen on March 22, 2023 at 2:32 am

    This week, I published Emerging Tech Impact Radar — The Metaverse (full report available to subscribing Gartner clients).   The metaverse is an evolutionary stage in how we interact with the world around us - specifically, how the intersection between physical and digital will change the interactions between people, places and things.  It is not a formal change, but a general one that will happen over time as digital experiences become immersive and interactions change due to persistent, decentralized, collaborative and interoperable digital content, overshadowing the less immersive experiences of the prior era. The metaverse is an example of a combinatorial trend, in which a number of individually important, discrete and independently evolving trends and technologies interact with one another to give rise to another trend.  The metaverse cannot be completely realized today, but will evolve via precursors in a series of overlapping stages.   As this megatrend passes through its first wave of hype, product leaders are tasked with understanding:
    • Which emerging technologies and trends (ETT) are most relevant to them in each phase.
    • When one or more relevant ETT should be adopted.
    • How current offerings and strategic roadmaps fit into the broader metaverse concept.
For the metaverse Impact Radar, I've highlighted 21 ETT because these are the ones that will have a very significant cross-pollination effect on the evolution of the metaverse.  I've grouped the ETT into 5 themes.  While these themes represent important aspects of the metaverse, neither the individual ETT highlighted nor the themes mentioned imply product leaders should focus all investment on a specific theme.  We recommend product leaders evaluate the individual ETT with regard to their product strategy, which may mean evaluating ETT across the 5 themes we have identified here. 
Gartner defines the metaverse as an immersive digital environment of independent, yet interconnected networks that will use emerging protocols for communications. It enables persistent, decentralized, collaborative, interoperable digital content that intersects with the physical world’s real-time, spatially indexed and organized content. Here is our Metaverse Impact Radar. Content and Context: The metaverse will contain digital content on a scale that eclipses what exists today. Digital representations of all items and locations within the physical world will be anchored and geoposed through a heterogeneous, but shared, point cloud as part of a digital layer that overlaps our physical reality. This layer will not only provide physical items with additional content and context, but also include items and information not represented in the physical world (such as processes for simulations, storytelling and gaming). To achieve this feat, synthetic data will be used to create better models and explore relationships and interactions when direct, observable data is lacking. Meanwhile, graph technologies will provide understanding of the relationships between digital and physical people, places and things. Decentralization: Decentralization will enable new possibilities in terms of information distribution, encryption, tokenization and immutability, providing new capabilities in managing digital assets and exposing new commercial possibilities. For example, “minimization of trust” will provide a user with direct ownership of their online identity for personal monetization if desired. Experiences and Interfaces: While there will be a vast range of devices and applications for users to experience the metaverse, the most popular and easy-to-use method will be via immersive technologies. These consist of devices specifically designed for augmented reality (AR), mixed reality (MR) and virtual reality (VR). 
Additionally, as the world becomes increasingly digitized, interactions in the digital space will not only take place between individuals, but also with artificial intelligence (AI) avatars that represent aspects of an individual, concept, function, company or group. Computing and Infrastructure: Many aspects and experiences of the metaverse will develop out of a shift in computing toward more distributed and decentralized processing and storage. Technologies (such as edge and spatial computing) will enable new physical-digital content and interactions, while advances in networking developed out of 6G will allow for experiences beyond what is possible today in terms of bandwidth and latency. Sensing: Data captured by a multitude of both stationary and nonstationary sensors will provide deeper insight into environmental context, enabling highly personalized interactions within the metaverse. Technologies such as computer vision, sensor fusion, multifactor biosensing and spatial mapping will not only process and segment static and dynamic elements of an environment, but also integrate disparate sources of content in a consistent, accurate and useful way. Recommendations for Product Leaders:
    • Create a metaverse strategy by developing a roadmap that clearly defines which technologies and partnerships are required to deliver value in the emerging, advanced and mature stages of the metaverse.
    • Review and prioritize investment in metaverse-enabling technologies that add user value when combined with your products or services. While technologies such as virtual reality and head-mounted displays (HMDs) are currently seeing strong marketing/hype, use cases remain limited.
    • Invest in exploring new business and monetization models enabled by metaverse-related technologies as the metaverse evolves beyond solving customer challenges today, by focusing on combinatorial technology solutions that can break new ground in value generation or disrupt existing business models.

  • Patch Management is not Vulnerability Management, so stop treating it that way.
    by Chris Saunderson on March 21, 2023 at 12:14 pm

    It feels a bit odd to be talking about patch management in 2023; this is something that we as IT professionals have been working on for decades. Yet one of the most frequent questions I get relates to improving a patch management strategy and its execution. Patch Management is a How; Vulnerability Management is a Why. You can be amazingly efficient at distributing patches but not change your threat profile an iota. Why? Because patching is just a means of distributing fixes, and isn't enough on its own. What happens when a patch only partially addresses a vulnerability? What happens if it enables a more secure protocol but doesn't disable negotiation down to a lower / earlier protocol version? (Yes, that's a TLS reference - well spotted.) You have been very efficient at patching, but you haven't addressed whether you are vulnerable. That's the key: orienting your view toward vulnerability changes what you focus on, and it also requires more effort than blanket delivery of patches. To that end, taking a more risk-based vulnerability management approach leads to more protection and more efficient use of resources - especially time! Once you have this framework in place, you are able to prioritize not only what must be addressed first, but also how the vulnerability can be addressed. This improves your patching execution in two dimensions: you are addressing the most vulnerable parts of your infrastructure first, and you are balancing the return on the effort against the interruption remediating may cause. What I'm not saying. I'm not saying that patch management is not important - it absolutely is. What I am saying is that it's only one of the ways that you can respond to vulnerabilities. The traditional ways of addressing vulnerabilities - patch management, configuration update, software update - are all effective... eventually, mostly due to the time it takes to execute.  
Mitigating controls are better at reducing the time to respond, but they may not be 100% effective, and they become another thing for you to manage and keep visible. And yes, risk acceptance is the final option for "addressing" a vulnerability. A quick aside on risk acceptance: one of the outcomes of risk-based vulnerability management is that instead of the hackneyed "I accept the risk" (à la Michael Scott declaring bankruptcy), you actually get a reasonable view into the risk being reviewed, the options for addressing it - and why your business partners may not be able to make that investment. Out of that can come creative ways of mitigating the vulnerability, as well as a clear understanding of what is being traded off. This is invaluable: it both demonstrates the vulnerability team's sensitivity to the business conditions they are operating in and creates the opportunity to bring your business partners into the decision-making process. What I'm not saying... also: I'm not saying that this makes patch management any easier to do. I am saying that it makes clearer what Must Be Done First, followed by what Must Be Done Next, all the way down to There's Nothing We Can Do Other Than Replace The Asset. Transparency and involvement in the decision-making process are your best levers for a more effective vulnerability response.
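As an illustration of the risk-based prioritization described above, here is a minimal sketch in Python. The scoring formula, weights and fields are illustrative assumptions for demonstration only, not a Gartner or CVSS methodology:

```python
# Illustrative sketch: rank a vulnerability backlog by risk, not by
# patch availability. Weights and fields are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    severity: float           # e.g. CVSS base score, 0-10
    exploited_in_wild: bool   # threat-intelligence signal
    asset_criticality: float  # 0-1, business importance of the affected asset
    remediation_cost: float   # 0-1, relative disruption of remediating

    def risk_score(self) -> float:
        """Higher score = address first. Balances likelihood and impact
        against the interruption that remediation may cause."""
        likelihood = self.severity / 10 * (2.0 if self.exploited_in_wild else 1.0)
        return likelihood * self.asset_criticality / (0.5 + self.remediation_cost)

backlog = [
    Vulnerability("CVE-A", 9.8, False, 0.3, 0.9),
    Vulnerability("CVE-B", 7.5, True, 0.9, 0.2),
    Vulnerability("CVE-C", 5.0, False, 0.5, 0.1),
]

# A high-severity CVE on a low-value asset can rank below a moderate one
# that is actively exploited on a critical asset.
for v in sorted(backlog, key=Vulnerability.risk_score, reverse=True):
    print(f"{v.cve_id}: {v.risk_score():.2f}")
```

Note how CVE-B (moderate severity, but actively exploited on a critical asset, cheap to fix) outranks the "scarier" CVE-A in this toy scoring.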

  • You don’t have to be a MAAMA to start adopting Site Reliability Engineering (SRE) Practices.
    by Hassan Ennaciri on March 21, 2023 at 11:01 am

    I have taken many inquiries on Site Reliability Engineering (SRE) in the last couple of years. I&O leaders are under high pressure to deliver reliable services at an unprecedented pace. They want to consider SRE, but often worry it is an advanced practice that only companies like Google or Microsoft can implement. My answer is always: you don't have to be a MAAMA (Meta, Amazon, Apple, Alphabet or Microsoft) to start adopting and benefiting from SRE principles. SRE is about optimizing the design, delivery and operations of products and services to meet customer expectations. The principles are broad and complex, but each organization can define its own SRE practice with achievable goals; a key to success is to start small, iterate and continuously improve. To ensure success, I&O leaders must dedicate a team of two to three SREs to start, and have them focus on optimizing the operations of a few existing products before scaling and broadening the practice. There are seven areas of focus that improve operations and present opportunities for considerable improvement in most organizations. You can take a deep dive in Gartner's 7 Steps to Start and Evolve an SRE Practice: https://www.gartner.com/document/4019056
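One concrete, achievable starting point for a small SRE team is the SLO/error-budget arithmetic at the heart of the practice. The sketch below is a minimal illustration; the 99.9% SLO target and the request counts are made-up example numbers:

```python
# Illustrative error-budget arithmetic for a single service.
# The SLO target and traffic figures are example assumptions.

def error_budget(slo_target: float, total_requests: int, failed_requests: int):
    """Return (allowed_failures, fraction_of_budget_consumed)."""
    allowed = total_requests * (1 - slo_target)
    return allowed, failed_requests / allowed

# A 99.9% availability SLO over 10M requests permits ~10,000 failures.
allowed, consumed = error_budget(0.999, total_requests=10_000_000,
                                 failed_requests=4_200)
print(f"Budget: {allowed:.0f} failed requests; {consumed:.0%} consumed")

# When the budget is nearly spent, the team shifts effort from shipping
# features to reliability work - an objective, shared decision rule.
```

Even this toy calculation gives a team a shared, numeric trigger for trading feature velocity against reliability work, without any MAAMA-scale tooling.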

  • Leverage The Most Recent Stage Of Hyperconvergence Evolution
    by Jeffrey Hewitt on March 21, 2023 at 10:10 am

    Hyperconverged infrastructure software is in wide use in today's market. These solutions are not static--they continue to evolve. Initially, hyperconverged solutions, both appliances and software, focused solely on enabling and managing hypervisors and software-defined storage. Some hyperconverged software solutions have since expanded their management and control capabilities to encompass the full stack, across and beyond the silos of infrastructure (Figure 1). Some full-stack HCI software vendors have also extended their deployment options to cloud providers such as Amazon, Google and Microsoft. The intention is to enable a turnkey private or hybrid cloud. In these cloud cases, the focus is on tools for managing, monitoring, securing, optimizing and governing on-premises, cloud and edge deployments. If you are seeking broader support across this range of features and functions from a single vendor, evaluating full-stack hyperconverged infrastructure software providers can surface solutions that deliver it. Gartner has just published the Market Guide for Full-Stack Hyperconverged Infrastructure Software, of which I am an author along with my co-authors Philip Dawson, Tony Harvey, and Julia Palmer. The advice is to use this guide to develop a short list of providers that can offer these features for you. In this way, you can take advantage of the full-stack capabilities that have become available in the hyperconverged market.

  • The Great Enablement Reinvention
    by delainey kirkwood on March 21, 2023 at 9:14 am

    Sales enablement has reinvented itself time and time again. First emerging as a function focused primarily on sales training, it has evolved to encompass broadly enabling efficient and effective sales execution. And now, the increasing prevalence of multichannel B2B buying journeys has prompted commercial organizations to revisit the remit of enablement once again. As customers interact with channels and roles owned by different functions, the risk of conflicting messaging and an inconsistent purchase experience increases, threatening revenue. In response, some progressive organizations have expanded enablement across all client-facing, revenue-generating roles. This transformation has a new name - revenue enablement.

    What is Revenue Enablement? Gartner defines revenue enablement as: bringing traditional, siloed enablement functions together to ensure all customer-facing roles have the technology, content and competencies needed to create a frictionless and consistent customer experience throughout the buying journey. This revenue enablement approach is prevalent. Based on Gartner's 2022 Chief Officer Strategy Survey, 65% of surveyed heads of sales and senior sales leaders describe their sales enablement function as focusing primarily on impacting multiple client-facing, revenue-generating functions. Revenue enablement allows enablement leaders to reduce internal friction and deliver consistent messaging to customers. Buyers who encounter consistent information from supplier sources are 2.89 times more likely to complete a high-quality, low-regret deal. The logic of this approach is borne out by enhanced commercial performance among organizations that have focused on revenue enablement (see Figure 1).

    Overcoming the Challenges of Revenue Enablement: Enabling the sales force can be difficult enough - scaling that enablement support to a range of customer-facing roles might seem insurmountable.
This is especially true when you consider the variability in skills, learning needs and activities across all revenue roles. A one-size-fits-all enablement approach leaves employees feeling that the support they receive is a poor match for their needs. So how can the enablement organization overcome this variability to deliver tailored support at scale?

    Learning From VMware: When VMware, an IT company based in Palo Alto, transitioned to a revenue enablement model, the remit of the enablement function expanded from supporting traditional sales roles to upskilling employees across all revenue-generating roles. VMware needed to develop an approach that allowed the revenue enablement team to deliver tailored, role-specific content at the point of need, at scale. To start the process, VMware generated heat maps of individuals' knowledge and skills against their role profile, based on self-assessments validated by managers, to identify individual skill gaps and enablement needs. The finalized heat maps help the enablement team understand an individual's skill and knowledge gaps and determine the tailored enablement resources that should be pushed to that individual. Next, VMware realized that it needed to provide enablement content at the moment when an opportunity to apply the skill arises. To solve for timing, VMware integrated its LMS+LXP platform into its CRM to create a central Learning Hub for commercial roles. As an employee works on an opportunity in the integrated CRM, they see a curated list of suggested learning content based on three inputs: the specific activity they are in, the workflow process stage they are in and the skills they need to develop as shown by the heat map. VMware had great success with this initiative: it delivered learning resources to more than 16,000 employees across all revenue roles.
The platform has had high engagement - on average, 86% of VMware revenue-generating employees complete more than three trainings through the Learning Hub within a quarter. Based on the offerings delivered through the Learning Hub, VMware also noted decreased time to close and lower discount percentages connected to specific training content. Ultimately, VMware connects increased win rates to specific learning modules, tying its enablement efforts directly to revenue. To learn more about VMware's initiative, read here: Case Study: Tailored Sales and Revenue Enablement at Scale (gartner.com)
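The heat-map mechanism the case study describes can be sketched in a few lines. The role names, skills, rating scale and logic below are hypothetical illustrations, not VMware's actual model:

```python
# Hypothetical sketch of the skill-gap "heat map" idea: compare
# self-assessed skill levels against a role profile to decide which
# enablement content to push first. Skills and scores are made up.

role_profile = {"discovery": 4, "negotiation": 3, "product_knowledge": 5}
self_assessment = {"discovery": 4, "negotiation": 1, "product_knowledge": 3}

# Gap per skill: positive values mark areas needing enablement content.
gaps = {skill: role_profile[skill] - self_assessment.get(skill, 0)
        for skill in role_profile}

# Push tailored content for the largest gaps first.
priorities = sorted(((s, g) for s, g in gaps.items() if g > 0),
                    key=lambda sg: sg[1], reverse=True)
for skill, gap in priorities:
    print(f"recommend content for '{skill}' (gap {gap})")
```

In a production system these gaps would be one of several inputs (alongside, per the case study, the current activity and workflow stage) to the content recommendation.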

  • Be ADEPT at Digital for Supply Chain Acceleration in Life Sciences
    by Stuart Williams on March 21, 2023 at 9:00 am

    Digital transformation in life sciences is difficult. It is technically more challenging to connect systems and processes, and there are barriers to radically changing ways of working. Life science companies lack the burning platform that drives them to transform; other industries MUST transform to survive. However, this doesn't mean that digital investments should be put on hold. On the contrary. While there isn't the threat associated with digital immaturity, there are huge opportunities in deploying foundational digital tools and processes that will enable more sophisticated digital data and analytics, and future-proof the supply chain as patients' expectations evolve the way customers' expectations have changed.

    Be ADEPT at Digital: Data is a critical enabler of digitization and data science that utilize the latest technological platforms and processes. Incomplete, inaccurate or obsolete data often slows digital transformation for life science companies, or forces its abandonment as too difficult. This leads CSCOs to chase sporadic use cases that demonstrate digital progress. However, this can obscure transformational change, impact and progress. ADEPT is a simple approach that gives life science companies a structured way of thinking about the digital journey. It starts with the removal of manual, labor-intensive tasks, employing automated solutions. As expertise builds in automation, a digital strategy can be considered. This should focus on how to replace core processes and tasks with new digital processes, enhancing and simplifying the user experience. Once processes have been digitalized, elimination and decommissioning become possible. This is where most stop. There are two critical steps that will make or break the digital journey. To drive adoption of new digital processes and systems, there should be a sharp focus on people and trust. As people start to trust digital innovation, they adopt it. It becomes integral, and they can't do without it.
On the flip side, when trust is not built, people will revert to their previous methods, no matter how manual or cumbersome they are. The digital journey is a long one — infinite, you might say. To stay on this journey, people need to evolve and build their capability and expertise through exposure to new technologies and strategies.

    Don't Do Digital for the Sake of Digital: Digital transformation and maturity need to be carefully considered in life science. In other industries that have faced worldwide disruption, global events and changes to consumer behaviors, digital agility can determine survival in the infinite game. Life science companies can learn from other industries how they are simplifying, reducing waste and cost, and optimizing their operating models. However, digital may not determine a life science company's survival or even profitability. But it could impact the patient through short-term teething problems or poor design and adoption. So where and how the supply chain is digitized should be decided with care. In life science, digital is an enabler of future innovation, supply chain agility and resilience. A significant investment is required to automate, digitize and eliminate, unlocking improvements in efficiency and cost reduction opportunities. ROI may take time, but …

    Use Case by Use Case or Enterprise-Wide Struggle? Think Process! Supply chain digitization usually takes one of two strategies. The first is problem- or failure-based: use cases are identified in any area of the supply chain, and a digital fix is developed in the hope that it will scale to multiple locations around the business. It is a fail-fast mentality, and it can take a long time before results are evident. The second is an enterprise mindset. Usually coming from the C-suite, this top-down approach is intended as a silver bullet to accelerate digital maturity and involves enterprise-wide architecture and systems, impacting the whole business.
This large-scale approach ticks the digital box, but functionally the user may not see significant advances. Supply chain digital strategies must focus on two key aspects: the process and the users of that process — in other words, user interface and user experience. Digital transformation is not easy, and it's sometimes not obvious why it is needed in the life science supply chain. Although many companies have started on their digital journey, there is often no clear burning platform driving the need for change. But change is required, and digitalization will unlock a whole host of opportunities for optimization, cost reduction, improved service and efficiency. The key to success is developing a good digital strategy and sticking to it. Define the building blocks and targeted outcomes, and manage the stakeholders to support the transformation.

    Stuart Williams
    Sr Director Analyst, Gartner Supply Chain
    Stuart.Williams@gartner.com

    Register for the upcoming webinar: Build a Supply Chain That Thrives in a Low-carbon Economy

    Listen and subscribe to the Gartner Supply Chain Podcast on Gartner.com, Apple Podcasts, Spotify and Google Podcasts