
Ethereal Bits

Tyson Trautmann's musings on software, continuous delivery, management, & life.


Unplugging in an On-Call World

April 1, 2019 by Tyson Trautmann

A few weeks ago, my family and I spent the weekend in the mountains and we left our screens behind. It was one of the best weekends of my life.

You don’t need me to tell you that we all spend too much time staring at screens. According to the Pew Research Center, 46 percent of Americans say they could not live without their smartphones for a single day. We crave constant access to information and we build our lives in a way where we become dependent on it, and in turn, that need for information begins to manifest in physical ways. For example, research that was recently conducted by Dscout shows that the average cell phone user touches their phone 2,617 times per day. Finally, the dependence on devices manifests in changes to the way we think. Studies performed at the Korea University in Seoul showed that smartphone addiction alters the chemical composition of the brain, leading to increased levels of anxiety, depression, and drowsiness.

For my fellow software engineers, the picture is even bleaker. The industry-wide move from shrink-wrapped software to online services, coupled with the DevOps movement, has resulted in engineers going on-call to respond to issues 24×7. Further, the transition from email and its implied “reply to me soon” SLA to chat tools like Slack, which carry a more aggressive implicit “reply to me at all times” SLA, has made it virtually impossible for engineers to ditch their devices, even when they aren’t on-call (side note: it’s shocking to me that almost all companies have opted into this regression).

Ironically, this sea-change of screen staring behavior has landed at the worst possible time. The gap between trivially repeatable jobs and demanding creative jobs continues to widen. The former kind of job is being rapidly automated while demand for the latter continues to increase. But doing creative work depends on the ability to go deep and focus for long periods of time, and the notification laden apps that populate our screens fuel a context switching mindset that is anathema to focusing.

A constant context-switching mindset is a killer for two reasons. First, it makes us less productive. Research by Dr. David Meyer, a psychologist who has devoted much of his research to cognitive focus and the impacts of multitasking, concluded that even brief context switching can reduce the total amount of productive time by up to 40%. Second, it prevents our brain from entering a state where it can engage with an idea deeply for a long period of time, which is necessary for everything from inventing something novel to carrying on a good conversation. Cal Newport covers this phenomenon in depth in his excellent book Deep Work and concludes that “Efforts to deepen your focus will struggle if you don’t simultaneously wean your mind from a dependence on distraction.”

How did we all become addicted to a technology that is so destructive to us? It certainly wasn’t an accident. In his book The Power of Habit, Charles Duhigg explains how habits are composed of a three-part loop: a trigger, the resulting action (or program) that we follow, and the reward. The original trigger for the smartphone habit loop was a need for information. You want to know the weather forecast, the name of a song, what your friends are doing, or what your social media followers think of a particular post, so you pull out your phone to get the information and enjoy a brief dopamine surge from the information gleaned. That trigger still exists today, but it’s been mostly superseded by a new trigger that was created when Research In Motion first deployed email push notifications to BlackBerry devices back in 2003. Smartphone users now invite applications to interrupt their lives at will through notifications of all sorts. Notifications create the ultimate habit loop: your phone buzzes or beeps in your pocket and your body can sense the impending glee that will come with finding out that your friend liked your latest social media post, making it almost impossible not to check the notification instantly.

It’s a grim picture, but our weekend in the wilderness convinced me that there is hope. My own journey to rid my brain of screen dependence began at the start of the year. My wife and I have started a January tradition of sitting down, vanilla lattes in hand, to discuss our new year’s resolutions and settle on a couple of shared resolutions for our family. For 2019, we set the goal of taking the family on one screen-free trip per quarter. A bit of planning and a car ride into the Cascade Mountains later, we found ourselves in the middle of our own little social experiment.

The impact of not having screens to pull us away was obvious almost immediately. Conversations were deeper because we were listening to each other instead of fidgeting with our phones in our pockets or pausing to check a notification. My wife and I both read hundreds of pages (from physical books!) that would have taken us days or weeks to get through in the fragmented schedule of our multitasking lives. We played games together, drank coffee together, ate amazing meals together, and were genuinely present. It was incredible how quickly I could feel my brain adapting to screen-less life, in a good way.

We’ve already begun planning our unplugged weekend for next quarter, but I’ve also spent the weeks since our trip thinking about how to bring unplugged ideas into my daily life. First, I’m looking at ways to pare back all notifications that aren’t absolutely essential or to restrict the time windows when I can receive notifications. Second, I’m looking at ways to unbundle the functionality that my phone currently provides so I can leave it behind more often. For example, I’m migrating back to physical books for reading, my Mighty for listening to music, and my trusty Canon 60D for taking photos. I’m toying with the idea of going back to a physical pager for work on-call escalation and I’m exploring options for maps and navigation (another side note: it turns out that maps, not calls, is the killer app for smartphones!).

Unplugging in an on-call world isn’t easy, but it is doable and it’s incredibly rewarding. It’s also a great way to identify tactical changes that you can bring back into daily life to reclaim your brain and improve your ability to focus.


Universities Are Alive & Well

February 18, 2019 by Tyson Trautmann

The impending demise of the university has been greatly overstated.

Since the launch of the first massive open online course (MOOC) back in 2006, a vocal group of university-dislikers have taken to social media aggressively to declare the university model obsolete and dead. The latest round of vitriol was seemingly spurred when Lambda School, a trade school that teaches coding to students online and uses an income sharing agreement in lieu of tuition, announced that they closed a $30M Series B funding round. University naysayers see new alternatives like Lambda School as attractive drop-in replacements for an outdated higher educational model.

Meanwhile, the data suggests that universities are alive and well, with most meaningful metrics moving up and to the right. Detractors rip the increasing toll of loans on graduates; a recent report from the Institute for College Access and Success claims that the average graduate takes on $28k in debt to attend school. But figures from a survey conducted by the Bureau of Labor Statistics show that attaining a university degree increases average annual earnings from $35k to $59k, dwarfing the cost of loan payments and showing that the broad financial value proposition of higher education is extremely attractive. Student demand for universities continues to skyrocket, particularly at top-tier schools. For example, the University of California Berkeley received 108k applications for admission in 2018, up 120% from the 49k that the university received in 2009. Demand from employers for graduates of top universities is also high. PayScale data shows that the average mid-career salary for a Stanford graduate grew from $112k in 2012 to $157k in 2018, an increase of 40% in just 6 years.

If students are turning out in record numbers to attend universities and the job market continues to place increasing value on a diploma from a high-caliber institution, why are people so quick to hate on schools and look for a new model? There are several good reasons. The first is that the current model is hitting scaling limits. 207M students around the globe are currently studying at higher-education institutions according to a paper published by UNESCO, but that’s still a relatively small fraction of the total population of 1B people between the ages of 16-24 and it would take decades to increase capacity by 2-4x with the current model. The College Board reports that the 10-year average tuition cost increase is roughly 5% per year, which is well ahead of inflation and will continue to make it even harder to make higher education accessible in poor areas. The second reason to challenge the current model is that it negatively impacts upward mobility by providing a disproportionately large opportunity for people from wealthy families. A report released by Opportunity Insights last year showed that 38 universities, including 5 of the 8 Ivy League schools, admitted more students from families in the top 1% of the income scale (families that earned more than $650k per year) than students from families in the bottom 60% (families that made under $60k per year).

Software scales infinitely, so it’s not surprising that people have embraced software platforms like MOOCs and software-powered coding boot camps as a means to make education scalable, provide access to all, and increase upward mobility. The problem is that current offerings like Lambda School that are being touted as university replacements are only offering a subset of what universities provide. The caliber of schools obviously varies wildly, but a good university offers students the following:

  1. Courses that provide deep knowledge in a subject of specialization. This includes the theory behind a subject, not just the practical. The theory is important because it provides a platform to understand the state of the art as practices evolve.
  2. Courses that provide broad knowledge in other subjects. Most innovation is going on at the intersection of multiple domains (eg. computer science and biology, cryptography and game theory) and jobs that are further from the bleeding edge will be earlier targets for automation. A general understanding of math, science, philosophy, ethics, and other subjects is more important than ever.
  3. An environment that encourages learning, innovation, and launching new ideas. There’s an unparalleled sense of energy that comes from dropping a group of smart and ambitious students from multidisciplinary backgrounds into the same physical space. It’s not a coincidence that so many companies are founded on university campuses.
  4. A strong and diverse network. It’s impossible to overstate the importance of a strong network in business, and universities provide a unique channel to connect with fellow students and alumni to build a network.
  5. A credential. Achieving a degree from a university means that with some probability, the credential holder can work hard, is intellectually curious, can work in a team, and has achieved at least a base level of the kind of broad/deep knowledge mentioned above.

At best, current university competitors are offering bits of #1 (focused on practical knowledge, not theory) and a less valuable version of #5. That doesn’t mean that those competitors won’t ultimately be successful in disrupting universities over the long term. Lambda School is currently following Clayton Christensen’s disruption theory formula by serving customers at the low end of the market (students that don’t want to pay up front for tuition, so universities don’t want them) in a way that is “good enough”. But the combination of technical differentiation through software distribution and business model innovation through different payment models won’t be enough for those companies to go upstream into the broader market until they start thinking about the value proposition of universities more broadly and looking for creative ways to deliver on #1-5 above.

As a fan of both universities and Lambda School, I hope this happens because competition will ultimately result in a better product. We should all be cheering for innovation that increases access to education and levels the playing field for students. But the people that are proclaiming that universities are dead have jumped the gun.


Finding Metronomes

February 10, 2019 by Tyson Trautmann

Alberto Salazar is a track coach and a former world-class runner who has held several American track records and has won both the New York City Marathon and the Boston Marathon. As a coach, Salazar looks for ways to blend the best practices in sprinting with those in distance running. One of his pieces of coaching advice for distance runners is to practice maintaining a high cadence (ideally close to 180 footfalls per minute) by running with an electronic metronome.

I read Salazar’s advice in an article and decided to give metronome running a try. As an amateur runner with a long natural stride, I’ve historically run with a low cadence that gets even lower towards the end of a run as I get tired. I assumed that my first couple of runs with a metronome would be slower while my body adjusted to a shorter stride and a quicker cadence, but the opposite was true; I ended up running some of my fastest times for the respective distances that I ran.

As I thought about the reasons for my improvement, it struck me that shortening my stride and improving my mechanics wasn’t the only benefit that I was getting from the metronome. The metronome was also simplifying the decision to keep my pace up. When I run without a metronome and I start getting tired, I’m faced with the frequent decision of whether to slow my pace to account for fatigue and, if so, how much to adjust my pace by. When I run with the metronome, I only infrequently need to decide that I’m going to continue to follow the metronome. In essence, the cognitive load of deciding how fast to run is reduced from high to low by turning a frequent continuous decision to an infrequent discrete one.

This is an incredibly powerful concept that isn’t limited to running. Changing behavior takes a massive amount of cognitive effort. In my experience, change is more likely to be successful if you look for “metronomes” that can reduce cognitive load than if you try to just brute force the change. Amazon CEO Jeff Bezos describes this phenomenon in his famous quote that “good intentions don’t work, but mechanisms do.” Mechanisms like Salazar’s metronome reduce the cognitive load of constantly trying to follow a good intention and increase the odds of success.

To think about how this plays out in the real world, consider an example: one of my New Year’s resolutions is to improve mental health by spending less time on my phone. I’ve put several “metronomes” in place to increase my odds of achieving the resolution. For example, I’m blocking out time on my calendar and using a Pomodoro timer app to restrict access to my phone for several hours in the evening while I hang out with my kids, read, meditate, and get ready for bed. My wife and I also just planned our first quarterly phone-free getaway weekend for our family as a means to recharge and reprogram our brains. It’s early, but I’m already seeing these mechanisms beginning to drive positive change that wouldn’t be taking place otherwise.

What behavior are you trying to change right now and how could a metronome help?


Move Fast, in the Right Direction

January 30, 2019 by Tyson Trautmann

Moving fast is great, but it doesn’t do you any good if you’re moving in the wrong direction.

Zack Kanter recently penned an article for TechCrunch where he made the case for startups to adopt a Serverless-first approach to increase software development velocity. “If the fastest startup in a given market is going to win,” says Kanter, “then the most important thing is to maintain or increase development velocity over time.” He’s absolutely right. If a billion-dollar market exists, both other startups and incumbents in that market (or in adjacent markets) are going to compete for it, velocity will play a large role in determining who wins, and going Serverless will increase long-term velocity.

But there’s another most important thing. As Martin Fowler wrote several years ago, “the biggest risk to any software effort is that you end up building something that isn’t useful. The earlier and more frequently you get working software in front of real users, the quicker you get feedback to find out how valuable it really is.” The best way for startups to build fast feedback loops, validate new ideas, avoid shipping things that aren’t useful, and to stay pointed in the right direction is to adopt Continuous Delivery.

Continuous Delivery (CD) is the capability to get any software change into production quickly, safely, and in a sustainable way. In practice, it is achieved by automating the entire process of taking a code change to production, which typically includes building the change, testing the change, deploying the change to pre-production environments for further integration testing, and then finally having an option to deploy the change to production environments. Most modern tools for automating the release process describe the automation as a pipeline that new software revisions flow through. As Fowler points out, companies that are able to maintain CD over time prioritize maintaining working delivery pipelines over new feature development.
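To make the pipeline metaphor concrete, here is a minimal sketch of a change flowing through ordered stages. The stage names and the `Change` type are invented for illustration; they don’t correspond to any particular CD tool’s API.

```python
# Minimal sketch of a delivery pipeline: a change flows through ordered
# stages, and a failure at any stage stops it from reaching production.
from dataclasses import dataclass, field

@dataclass
class Change:
    revision: str
    history: list = field(default_factory=list)

def release(change):
    """Walk a change through build -> test -> staging -> production."""
    stages = [
        ("build", lambda c: True),           # compile/package the revision
        ("unit-test", lambda c: True),       # fast feedback tests
        ("staging-deploy", lambda c: True),  # pre-production integration tests
        ("prod-deploy", lambda c: True),     # final, optionally gated, deploy
    ]
    for name, action in stages:
        change.history.append(name)
        if not action(change):
            return False  # stop the pipeline; the change never reaches prod
    return True

change = Change("rev-42")
release(change)
print(change.history)  # every stage the change flowed through, in order
```

Real pipelines replace each lambda with a build system, a test suite, or a deployment, but the shape is the same: an ordered sequence of automated gates between a commit and production.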

The Continuous Delivery landscape is riddled with domain specific jargon that can be confusing for people who are approaching the space for the first time, so it’s worth taking the time to understand a few related concepts. Continuous Integration (CI) is the practice of merging code back from development branches into a shared branch often, which requires thorough and reliable tests as well as investment in build and test infrastructure so that tests can be run against any change to validate that the change can be safely merged. Some people describe CI as being upstream from CD in the development and release process (which is where the term CI/CD comes from), but given the above definitions, it makes more sense to view CI as the early part of the CD process because it plays a vital role in empowering an organization to ship any change. Continuous Deployment is the process of actually releasing every change to production. You need Continuous Delivery to Continuously Deploy, but the reverse is not true.

None of these ideas are new. Kent Beck coined the term Continuous Integration twenty years ago as part of the Extreme Programming movement, Timothy Fitz first blogged about Continuous Deployment in 2008, and Jez Humble and Dave Farley published the first book on Continuous Delivery in 2010. So why is CD a big deal for startups right now? 

In their book Accelerate, Nicole Forsgren, Humble, and Gene Kim reveal their findings from several years of research about the way that teams ship software. Their flagship finding is that high performing teams achieve higher levels of throughput, stability, and quality without trading those attributes off against one another. More specifically, high performing teams deploy small batches of software changes to production frequently, which results in a lower percentage of changes failing, quicker resolution when changes do cause failures, and lower lead time to go from a customer making a feature request to shipping the requested feature. Perhaps most importantly, CD also lets these teams test the features that they’re shipping frequently so that they can validate them with customers and course correct if the features aren’t meeting a real customer need.

Despite these benefits, CD is still under-practiced, particularly at small companies and startups. A recent study conducted by DigitalOcean found that 52.1% of respondents at companies with more than 1,000 employees reported using CD, but just 45.7% of respondents at companies with 6-25 people and 35.3% of respondents at companies with 1-5 people are using CD. If startups win by moving quickly in the right direction, there is no reason for any software startup to launch without embracing CD on day one. The process of manually shepherding a dozen changes to production likely costs more than the initial investment necessary to automate the release process and adopt CD.

The number of tools available for adopting CD and automating the release process is growing rapidly. For companies that deploy their infrastructure to Amazon Web Services, AWS CodePipeline lets customers automate their release process from hosted source code repositories like AWS CodeCommit or GitHub all the way to deployed services running on AWS EC2, ECS, Lambda, or other AWS environments. Google’s Cloud Build allows customers to create pipelines that use built-in integrations to deploy to Kubernetes Engine, App Engine, Cloud Functions, or other GCP environments. Microsoft’s Azure DevOps product includes Azure Pipelines, which can automate releases not only to Microsoft offerings like Azure VMs and Container Service, but also to other cloud providers. Elsewhere in the cloud ecosystem, products like Jenkins, Travis, CircleCI, and GitLab are all popular solutions for automating the release process. Microsoft’s $7.5B acquisition of GitHub has caused a fresh wave of excitement about the developer tools ecosystem from early-stage investors, which is fueling a new wave of exciting startups that are pushing the CD frontier forward while driving down costs.

Further, CD is eating the entire stack, which is opening up new potential benefits for adopters. The rise of Infrastructure as Code (IaC), provisioning resources up and down the entire software and hardware stack through software instead of a manual process, has made it possible to apply the principles of CD to the entire stack. Companies that leverage IaC offerings like HashiCorp’s Terraform or AWS CloudFormation can take advantage of the CD products listed above to deploy low-level compute and storage resources continuously in the same way as software. Need to spin up capacity in a new Virtual Private Cloud in a different region to add redundancy and reduce latency? Push a template change through your infrastructure pipeline and the new resources will be provisioned for you with the same kinds of testing and blast radius mitigation that you’re used to getting from a software pipeline.
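As a toy illustration of the plan/apply flow that IaC tools follow: declare the infrastructure you want as data, diff it against what exists, and provision only the difference. The dict-based “template” format here is invented for illustration, not actual Terraform or CloudFormation syntax.

```python
# Infrastructure as data: a change is just a diff between the desired
# template and the current state, applied by the pipeline.
current_state = {"vpc-us-east": {"subnets": 2}}

desired_template = {
    "vpc-us-east": {"subnets": 2},
    "vpc-eu-west": {"subnets": 2},  # new region for redundancy and latency
}

def plan(current, desired):
    """Diff desired infrastructure against what exists (like `terraform plan`)."""
    return {name: spec for name, spec in desired.items()
            if current.get(name) != spec}

def apply(current, changes):
    """Provision the diff (like `terraform apply`); a pipeline runs this step."""
    current.update(changes)
    return current

changes = plan(current_state, desired_template)
apply(current_state, changes)
print(sorted(current_state))  # ['vpc-eu-west', 'vpc-us-east']
```

The point of the sketch: because infrastructure is expressed as a versioned artifact, the same build/test/deploy pipeline discipline used for code applies to it unchanged.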

Every startup is ultimately a factory that takes time and capital as inputs and produces a product or service as an output. The company that wins in any given market is determined by who reaches product/market fit first. First-mover advantage only goes so far; battles are ultimately won by maximizing velocity and building tight feedback loops to stay pointed in the right direction, and Continuous Delivery is one of the very best available tools to achieve that outcome.


Scrum, Control Planes, and Data Planes

January 17, 2019 by Tyson Trautmann

It’s not uncommon to see tweets bouncing around the twittersphere describing how agile methodologies for software development are bad because of [insert problem here] and can be fixed by [insert solution here]. This week, a tweet by Ryan Singer caught my eye:

“Been talking to CTOs and Product Managers about their processes. Themes: low morale among engineers and the feeling they aren’t making meaningful changes to the product. Two-week sprints, Scrum and JIRA contribute to a high-overhead, low-productivity culture of micromanagement.”

Ryan correctly identifies an issue that every engineering manager needs to be cognizant of: engineers need to feel like they are making meaningful changes to the product or team morale will be low. In my opinion, however, his proposed solution misses the mark. In a subsequent tweet, he suggests that teams adopt longer sprints where engineers are free to define tasks and implement them how they see fit within a sprint. He’s proposing a Data Plane fix for a Control Plane issue.

In the world of routing, the Control Plane manages the rules and configuration of the system, including the current known network topology and other information necessary to route packets. The Data Plane forwards packets based on the information that has been provided by the Control Plane. The Data Plane is essentially the execution engine, while the Control Plane provides the rules with which to execute. The Control Plane and Data Plane operate in separate threads and have very different traffic profiles. The Control Plane receives relatively few requests, so handling each one can take orders of magnitude longer, but it’s important that every request is received and handled correctly. The Data Plane, on the other hand, is typically flooded with requests that must be handled quickly, but it’s less critical that every request is forwarded optimally because errors can often be handled downstream. The integration point between planes is typically the Routing Information Base (RIB), which is written to by the Control Plane and read from by the Data Plane.
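The split can be sketched in a few lines. This toy router uses a shared dict as the RIB and a simple first-match lookup; a real router does longest-prefix matching and far more, so treat the names and logic here as illustrative only.

```python
# Toy illustration of the control-plane / data-plane split.
# The control plane writes routes into a shared RIB; the data plane
# only reads the RIB to forward packets.
rib = {}  # Routing Information Base: destination prefix -> next hop

def control_plane_update(prefix, next_hop):
    """Infrequent, correctness-critical: install or change a route."""
    rib[prefix] = next_hop

def data_plane_forward(packet_dest):
    """High-volume, latency-critical: look up a route and forward."""
    for prefix, next_hop in rib.items():
        if packet_dest.startswith(prefix):
            return next_hop  # first match; real RIBs do longest-prefix match
    return None  # no route; a real router would drop or punt the packet

control_plane_update("10.0.", "eth0")
control_plane_update("192.168.", "eth1")
print(data_plane_forward("10.0.3.7"))  # eth0
```

Notice the asymmetry: `control_plane_update` runs rarely and must be correct, while `data_plane_forward` runs on every packet and must be fast — the same asymmetry the Scrum analogy below leans on.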

In Scrum, the Control Plane is the work that is traditionally done by a Product Owner to generate the Product Backlog as well as some of the work that is typically handled by a Scrum Master to define the parameters under which the process operates (sprint duration, ritual details, etc.). The Data Plane includes all of the rest of the work that is done by the Scrum Master and the Development Team to pull work from the Product Backlog into the Sprint Backlog and execute on that body of work. Like routing, the Control Plane and Data Plane operate on separate logical “threads”; a different cast of characters is usually involved in the decisions made on each plane, and the cadence of delivery is different (eg. writing to the Product Backlog does not follow the Sprint cycle). The integration points between Scrum planes are the Sprint Planning process, where work is pulled from the Product Backlog into the Sprint Backlog (the equivalent of the routing Data Plane reading from the RIB), and the Sprint Retrospective process, where feedback logically flows back into the Control Plane and can be used by the Scrum Master to change the parameters of the Scrum process.

Based on the context, Ryan’s statement that engineers aren’t “making meaningful changes to the product” could mean one or more of the following:

  1. So much time is being spent on Scrum rituals that engineers have no time to engineer.
  2. Project management tools like JIRA take so much time to use that engineers have no time to engineer.
  3. Engineers have no mechanism to give input into the Product Backlog; the Product Owner is dictating product changes without taking feedback and the Scrum Master is also being overly prescriptive about implementation details.

#1 and #2 are Data Plane issues that I have trouble buying. On teams where I’ve run 2-week sprints, I’ve typically held a two-hour Sprint Planning meeting, a one-hour Sprint Review meeting, a one-hour Sprint Retrospective, and fifteen-minute daily Stand-up meetings. That means that <10% of our Sprint time was spent in team rituals. In my experience, the time necessary to keep task status up to date in JIRA or another project management tool once the Sprint has been planned is also trivial in comparison to the amount of time available in a Sprint. I’m not against making Data Plane optimizations to these processes, but there isn’t much to be gained.
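The arithmetic behind the <10% figure, assuming ten 8-hour working days per 2-week sprint:

```python
# Ritual overhead for a 2-week sprint: 10 working days of 8 hours each.
sprint_hours = 10 * 8
ritual_hours = 2 + 1 + 1 + 10 * 0.25  # planning + review + retro + stand-ups
print(ritual_hours / sprint_hours)  # 0.08125, i.e. ~8% of the sprint
```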

#3 is a Control Plane issue that I have seen many times, but it cannot be adequately solved by changes to the Data Plane. Engineers care deeply about the products that they work on and have unique insight into the details of the product and how it is implemented. It’s critical to inject engineers into the Control Plane by building mechanisms that give them a seat at the Product Owner table. On my team at Amazon, we do this in a number of ways, including engaging engineers in our Working Backwards process (eg. having them assist in writing PR/FAQ docs for new features that we’re considering) and by including them in roadmap planning exercises as key stakeholders and contributors.

If you want to boost team morale, avoid micromanagement, and ensure that team members can meaningfully impact the product, plug engineers into the product planning process instead of trying to hack product planning into the middle of a Sprint.


On Teams & Problem Spaces

March 3, 2018 by Tyson Trautmann

Teams are most effective when they are organized around problem spaces and are explicitly named after the problem space that they’re solving. Unfortunately, if my experience is representative, this isn’t the norm. If I think back on the organizations that I’ve been a part of as an individual contributor or inherited as a manager at Microsoft, Amazon, and Riot, I’ve frequently seen teams organized around products, solution spaces, and code names. To explain each option, consider a fictitious team with the following properties:

  • The team is part of a larger organization that builds tools for feature teams to deploy, operate, and scale backend services on the company’s private cloud.
  • The team previously decided that its mission is “to make it easier for feature teams to operate deployed services” and its vision is that “services are highly available and easily scalable with virtually no operator intervention”.
  • The team’s current flagship product is an alerting and monitoring system called Brometheus.
  • The team happens to be obsessed with Star Wars (who isn’t?).

Now suppose you’re tasked with naming the team. You could name the team the Brometheus Team after its most important product. Alternatively, you could call them the Monitoring Team, since the solutions that they currently own are in the monitoring space. You could accept the suggestion of a couple of team members who believe that names don’t matter and have pushed to be called the Jedi Order. Or, you could name the team after the broader problem space and call it the Operability Team.

The final option is superior to the other options for several reasons:

  • Unlike the product and solution space options, it doesn’t constrain thinking and imply a particular solution. If you’re the Brometheus Team, your work will always revolve around improving that product (you have a hammer and everything will look like nails). Similarly, the Monitoring Team will always focus on building monitoring products. The Operability Team is free to decide that they can make it easier to operate services by working on auto-scaling, service call-tracing, or debuggability instead.
  • Unlike the code name option, it’s immediately obvious what your team does and what sandbox you play in. The Jedi Order may sound cute and the team may rally around that identity in the short term, but at scale, it becomes difficult for customers to keep a mental map of who is working on what. Further, the team’s identity is rooted in being a team rather than in the problem they’re solving, and the former tends to fizzle much more quickly.

Reorganizing or renaming teams can be hard work, particularly if the team’s current identity is tightly wound around their current name or organizational structure. But in the long run, doing the work to identify problem spaces, organize teams around those problem spaces, and to explicitly name teams after the problem space they’re tackling is worth it.


Copyright © 2019 · Atmosphere Pro on Genesis Framework · WordPress