Leadership Musings
Why Post This?
This post is a place for me to collect my thoughts on key aspects of how I approach technical leadership, so I can talk about them more effectively when asked.
Leadership Style(s)
I periodically refresh my perspective on styles by re-reading Leadership That Gets Results. It's 25 years old at this point, but much of its perspective is still relevant today. The key idea that holds up is that leadership styles are tools, and each should be applied where it works best.
As defined by that framework, my 'default' styles are a blend of the 'authoritative' and 'coaching' styles. I also have some 'affiliative' style sprinkled in. In times of crisis (like operational issues) I can unbox the 'coercive' style, but my preferred approach there is to set up systems so teams can react to crises at pace without me, and to aim at mitigating crises altogether.
Specifically:
- As part of strategy development, planning, and goal-setting, I wrap that work into a vision and push to get the organization on board with it.
- I focus on setting up systems that delegate authority and accountability downward in a consistent way.
- I help teams and individuals through feedback and support, coaching them toward optimal output.
- I naturally care about people and their wellbeing, so I include that, but not at the expense of progress. I call this 'compassionate directness'.
Setting (and Evolving) Strategy
TL;DR
- Assess the business needs and how to win
- Define the plan to get there and how to measure success
- Roll it out and review
More Detail
- Understand the business's dreams for 5 years, goals for 3 years, and needs for 1 year
  - Gather input from business/product stakeholders and the market
  - Gather input from technology stakeholders (tech leadership, CISO, legal/governance/compliance)
  - Gather input from the engineering team (team pain points, maturity assessment)
- Define (or partner to define) the plan and roadmap to get there, and how to measure progress/success
  - Keep metrics focused on the chief business movers and chief constraints, 4-6 total (see the illustrative set after this list)
- Roll out the strategy and establish a cadence for communication and progress review, as well as a cadence for iterating on and improving the strategy
  - Review OKRs with the team regularly (monthly at minimum) to reinforce their importance
  - Evaluate the OKRs quarterly and iterate if needed
  - Evaluate the strategy annually and iterate if needed
  - Reward the team for exemplary work
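As a concrete (and entirely hypothetical) illustration of what a focused metric set might look like, here's a minimal sketch; the metric names and numbers are invented, not drawn from any real plan.

```python
# Illustrative only: a hypothetical set of five org-level metrics, each tied to a
# chief business mover or chief constraint. Names and numbers are invented.
org_metrics = {
    "checkout_conversion_rate": {"baseline": 0.031, "current": 0.032, "target": 0.035},
    "p99_api_latency_ms":       {"baseline": 850,   "current": 610,   "target": 400},
    "change_lead_time_days":    {"baseline": 9,     "current": 6,     "target": 3},
    "change_failure_rate":      {"baseline": 0.18,  "current": 0.15,  "target": 0.10},
    "infra_cost_per_order_usd": {"baseline": 0.42,  "current": 0.39,  "target": 0.30},
}

for name, m in org_metrics.items():
    # Fraction of the baseline-to-target gap that has been closed so far.
    closed = (m["current"] - m["baseline"]) / (m["target"] - m["baseline"])
    print(f"{name}: {closed:.0%} of the way to target")
```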
Strategic Change Agents in 2024/2025
- AI, its benefits and disruptions
- US Administration and Policy changes
- Corporate return-to-work policies
- Continued economic pressure
How to Drive Work
- Develop roadmap prioritized by business strategy, as measured by OKRs
- Establish capacity allocation targets for roadmap work vs. KLO (keep-the-lights-on)/tech-debt work (85/15); see the tracking sketch after this list
- Make sure the team's work process is set up to capture this data with low overhead (tickets/reports)
- Ensure roadmap-centric tasks are prioritized and reviewed constantly
- Ensure KLO/debt work is aligned to development principles (security SLAs, unit tests, automation, etc.)
- Review on a regular basis and tweak (bi-weekly)
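A minimal sketch of how that split could be tracked from ticket data, assuming each ticket already carries a category label and a point estimate (the field names here are hypothetical):

```python
from collections import Counter

def capacity_split(tickets):
    """Share of completed points spent on roadmap work vs. KLO/tech-debt."""
    points = Counter()
    for t in tickets:
        bucket = "roadmap" if t["category"] == "roadmap" else "klo_and_debt"
        points[bucket] += t["points"]
    total = sum(points.values()) or 1  # avoid division by zero on an empty sprint
    return {k: v / total for k, v in points.items()}

# Example sprint: compare the observed split against the 85/15 target.
sprint = [
    {"category": "roadmap",   "points": 21},
    {"category": "klo",       "points": 3},
    {"category": "tech-debt", "points": 2},
]
print(capacity_split(sprint))  # {'roadmap': ~0.81, 'klo_and_debt': ~0.19}
```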
Driving to Engineering Maturity
To build and maintain a highly mature technology team, the place I tend to start is with DORA-centric metrics. As for strategies for improvement, where to focus depends on the specific team's issues.
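As a rough sketch of what 'DORA-centric' means in practice, here's how the four metrics might be computed from a month of deployment records; the record shape is hypothetical, not the schema of any particular tool.

```python
from statistics import median

def dora_metrics(deploys, window_days=30):
    """Compute the four DORA metrics from a list of deployment records.

    Each record is assumed to carry: deployed_at (datetime), commit_times (list of
    datetimes for the changes it shipped), failed (bool), and restore_hours (float).
    """
    lead_times = [
        (d["deployed_at"] - commit_at).total_seconds() / 3600  # hours from commit to deploy
        for d in deploys
        for commit_at in d["commit_times"]
    ]
    failures = [d for d in deploys if d["failed"]]
    return {
        "deploys_per_day": len(deploys) / window_days,
        "median_lead_time_hours": median(lead_times),
        "change_failure_rate": len(failures) / len(deploys),
        "median_time_to_restore_hours": median(d["restore_hours"] for d in failures) if failures else 0.0,
    }
```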
At the foundation, continuous delivery (and deployment) are critical drivers that focus teams on fast customer feedback. If the team isn't practicing CD, getting there is the first-order priority, as CD is all about risk amortization.
Here's a chart showing how deploying slowly increases the risk we put in front of customers, because a lot of risk gets realized at once.
Here's how CD amortizes that same risk over time, lowering the impact to customers.
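The same point can be made with a toy calculation; assume each change independently carries a 5% chance of causing a customer-facing issue (the numbers are invented, purely for illustration):

```python
# Toy model, not a model of any real system: each change has an independent 5%
# chance of causing a customer-facing issue.
p_issue = 0.05
changes_per_quarter = 60

# Big-bang release: all 60 changes ship at once, so the odds that the release
# contains at least one failure are high, and all of that risk lands in one event.
p_bad_release = 1 - (1 - p_issue) ** changes_per_quarter
print(f"Quarterly release: {p_bad_release:.0%} chance the release ships a failure")

# Continuous delivery: the same 60 changes ship one at a time. Roughly the same
# number of issues occur overall, but each is isolated to one small change that
# is easy to pinpoint and roll back.
expected_bad_deploys = changes_per_quarter * p_issue
print(f"CD: ~{expected_bad_deploys:.0f} small, isolated failures spread across the quarter")
```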
Generally, though, here's where I'd start focusing (assuming CD practices and infrastructure are in place).
- Change Lead Time
- If lead time is due to latency in requirements, coordinate with product/architecture to focus on the size of the tickets and break down requirements
- If lead time is due to delays in execution across the eng team overall, start with task complexity, but examine the system complexity with the team too.
- If it's a specific individual who is dragging, coordinate with them individually to coach them up or out, perhaps using a framework like SPACE.
- Deployment Frequency
- If the team doesn't follow CD as a practice, discuss and remediate the reasons why (canary measurement)
- If CD is being followed, as above, start with task and system simplification
- Failure Percentage
- Ultimately the root here is most likely centered in lack of good automated testing.
- Unit test coverage, with build/deployment failures on misses, is table stakes and a good place to start, but it is not sufficient on its own for complex systems.
- A canary deployment strategy, or other live-routing strategies (blue/green), is the next place to target, coupled with customer-centric metrics for assessment/promotion (vs. basic system health metrics only), as it focuses the team on customers.
- This also demands the team think 'operations-first', with on-call, alarms, etc.
- Lastly, to reduce MTTD/MTTR, a suite of automated integration tests running continuously in production is typically required for complex system journeys, so edge-case risks can be vetted.
- Change failure recovery time (including escaped defects)
- Deployment strategies are the first place to look here. The best way to recover from a failed deployment is to limit traffic to it via a canary, so customer impact is contained, and roll back if it doesn't work (see the promotion-gate sketch after this list).
- In cases where a change 'failure' is more of an unintended side effect of a feature release, loading those types of features into an experimentation framework with feature flagging is the right mitigation.
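A minimal sketch of what a canary promotion gate keyed to customer-centric metrics might look like; the metric names and thresholds are hypothetical, chosen only to illustrate the shape of the check.

```python
def should_promote(canary, baseline, max_error_delta=0.005, max_latency_ratio=1.10):
    """Promote the canary only if customer-facing metrics hold up against the baseline fleet."""
    error_ok = canary["checkout_error_rate"] <= baseline["checkout_error_rate"] + max_error_delta
    latency_ok = canary["p99_latency_ms"] <= baseline["p99_latency_ms"] * max_latency_ratio
    return error_ok and latency_ok

# Example readings from the two fleets during the canary bake period.
canary   = {"checkout_error_rate": 0.004, "p99_latency_ms": 420}
baseline = {"checkout_error_rate": 0.003, "p99_latency_ms": 400}
print("promote" if should_promote(canary, baseline) else "roll back")
```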
Building Teams
TODO
Growing Teammates
TODO