The Algorithmic Bonus Mindset
A bonus is an unexpected extra reward that you did not factor into initial risk/reward calculations; a sign of serendipity in a process. But somehow, we've gotten into the bad habit of thinking of bonuses as expected but uncertain rewards that can be reliably obtained by "overperforming" in some way, relative to the preset expectations of a human counterparty like an employer or client. From Wall Street bankers to restaurant waiters and Uber drivers, everybody "plans" around supposedly "bonus" rewards. We even bring notions of efficiency to this. We try and "hack" reward functions for bonuses the way AIs do, as though we were rats in a closed-world Skinner box.
An expected bonus, like a wedding gift registry, is almost a contradiction in terms. If you can anticipate it, you can factor it into risk/reward estimations and modulate effort in response. That's not a bonus. That's just regular risk management and planning under uncertainty, with some people designing incentives for other people to game. Let's call these artificial bonuses. The opposite of an artificial bonus is a natural bonus: genuinely unexpected extra rewards that are the ex-post result of deriving opportunistic value from unexpected insights and discoveries. To get a natural bonus, you don't need to over-perform, you need to be open-minded and alive to out-of-scope opportunity.
Information work is especially rich in a particular kind of natural bonus: the algorithmic bonus. When work primarily comprises programming machines that labor rather than laboring yourself, an insight or discovery generally allows you to gain a bonus by rethinking the scope of what you're doing. This is not "thinking outside the box" so much as rethinking the Skinner box itself.
The pure refactor that just adds a dose of lean efficiency is rare. The picture below illustrates the difference between the two kinds of bonus. An artificial bonus cashes out a lean efficiency gain as a time savings (right) relative to a nominal plan (top). An algorithmic bonus cashes out an unexpected "fat" gain by expanding scope (left) with extra tasks E, F, and G, relative to the nominal plan.
How do you systematically catalyze things like the left picture, and why?
Algorithmic bonus (left) versus efficiency bonus (right)
1/ How do you cultivate an ‘unexpected genuine bonus’ attitude to work that catalyzes actionable insights, discoveries, inventions, and spillover societal value? And why?
2/ Let's tackle the how first. Interestingly enough, the trick to this lies in a dilemma within the mundane, everyday problem of valuing effort for pay.
3/ Whether you make money on a salaried, hourly, project-based, or outcome basis, people generally don’t want to pay for either effort OR output in information work.
4/ They want to pay for apparent effort visible in output, with 20/20 hindsight, with no obvious lower-effort paths than the one taken.
5/ Nerdy point: this is ex-post modal rationality. It’s like conducting a Vickrey (second-price) auction of counterfactuals.
6/ Whoever did the work “wins” the reward, but is paid the bid price of the fictitious second-price producer who coulda-shoulda-woulda done it cheaper via the benefit-of-hindsight shorter path.
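A minimal toy sketch of that pricing rule, to make the analogy concrete. This is illustrative only, assuming a hypothetical hindsight_payment helper and made-up day counts: in a true Vickrey auction the winner pays the runner-up’s bid, but in this analogy the “runner-up” is a hindsight counterfactual, so the payment collapses toward the cheaper imagined path.

```python
# Toy sketch of the "second-price auction of counterfactuals" in 5/ and 6/.
# Illustrative only: the helper name and the numbers are made up.

def hindsight_payment(actual_bid, counterfactual_bids):
    """The person who did the work 'wins', but is paid whichever is lower:
    their own bid, or the cheapest bid a fictitious hindsight-equipped
    competitor could have made."""
    return min([actual_bid, *counterfactual_bids])

actual_bid = 15.0            # days of effort actually sunk exploring
counterfactual_bids = [1.0]  # the shorter path that is obvious only ex post
print(hindsight_payment(actual_bid, counterfactual_bids))  # -> 1.0
```

The gap between the effort actually sunk and that counterfactual price is exactly where elegant work gets devalued, which is the point of 7/ through 9/ below.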
7/ So unexpected elegance in output can end up devaluing the effort that went into creating it. 20/20 hindsight represents a _risk_ in creative work, one that inhibits imagination.
8/ And no, you can’t pre-scope discovery: “I will stumble on 2 insights while coding this feature that lead to a 10x more elegant third iteration, for which I will charge $X per line.”
9/ This means things that take a complex path to get to but can be vastly simplified with hindsight insights once you’re there — the essence of art+science beyond craft — are systematically undervalued. This is the reason for a lot of obscurantism in presenting the output of work.
10/ The maker’s dilemma: if you DON’T make the simplifications that become obvious with hindsight as you do the work, the customer might spot them in the output and think you’re stupid.
11/ If you DO make them, they’ll assume you put in less effort than you did, based on the apparent effort, and resent having to pay you the estimated price.
12/ This dilemma is inherent in all types of information work, but especially in kinds where there are unreasonable-effectiveness mechanisms like math, algorithmic structure, or laws of physics: hidden “nature’s gift” processes entangled with the work processes.
13/ Sometimes the dilemma is avoided by the essential insight being so non-obvious that you can act on it and reasonably make the case that you couldn’t have anticipated it. But this is often much harder to do than people think. Insights have a bad habit of appearing obvious ex-post.
14/ There are ways to mitigate this: two bad, one good. The two bad ways, which both rely on bullshit jargon, are: a) Obfuscate potential simplifications and deliver needlessly complex output 😖 and b) Obfuscate elegant output so apparent effort equals actual effort 🤮
15/ The good way is c) Parlay elegance from hindsight simplification of original work into BONUS output and figure out how to derive value from it later 😎.
16/ A bonus is low-marginal-effort/high-marginal-value unexpected extra output that cashes out discovered elegance in a solution to a problem via speculative scope expansion that helps map the value landscape better. You basically create and pick low-hanging fruit people didn’t think was within reach, and weren’t aiming for.
17/ It is exceeding expectations but not to demonstrate a superhuman work ethic. Instead it is exceeding expectations as a consequence of unexpected discovery, by spilling out of the boundaries of the expectations and saying "we drew the box wrong, here's a better box."
18/ You’re saying, “this turned out to be more delightfully elegant than we thought when scoping, here’s a cherry on top, and let's talk about updating the scope based on what we now know!”
19/ Your own expectations of effort/reward ratio were pleasantly violated and you’re capturing and passing on illustrative samples of the excess reward. As a result, total apparent effort equals total actual effort but everybody gets more value than they expected, with no dumb obfuscation.
20/ For this to happen, a bonus needs to feel like a good gift. Something that expands minds by showcasing the abundance you’ve stumbled across. It should address a subconscious want that’s outside the plan, rather than a conscious need that’s within it. Not 20% more; 20% different. Value, not price.
21/ Bureaucratic organizations and computers by definition lack the judgment and agency to recognize and appreciate such bonus output and reward it accordingly. In fact, particularly primitive bureaucracies can't even accommodate things like being early or saving money, and often penalize even such narrow overperformance.
22/ It takes a human with agency and judgment to actually change their mind about what is valuable based on new information, discoveries and qualitatively different outcomes relative to expectations. A natural bonus is an implied rewrite of the reward function of work, ex-post.
23/ The thing about discovered elegance (i.e., potential for hindsight simplification in the output of complex effort) is that it’s not necessarily “efficiency” in the sense of achieving a goal with fewer resources, gaining “savings”. You’ve already sunk inelegant effort. So what to do?
24/ If you’re on an iterative learning curve, maybe future instances of effort can be cheaper. For example, in doing a manual analysis, you spot an elegant algorithm for automating it that makes future instances much cheaper and faster. That’s close-ended learning. Lean learning.
25/ But discovered elegance is rarely that limited. Nature rarely hands you “lean” gifts that can only be used to make the next instance cheaper. Nature is not a “25% off next purchase” coupon gifter. Nature’s gifts have unexpected “fat” spillover potential, like cashback. They add potential energy.
26/ But to actually claim the natural bonus, you have to open up the scope of what you’re doing. You have a hammer in your hand you didn’t expect to have. You must look around for nails you didn’t think could be hammered. If you don’t, and only look within existing scope, you will likely lose, net.
27/ The hammer in this metaphor can be thought of as discovered intellectual property (IP). Depending on the nature of the work context, you may have some claim on the IP itself (for example, my former employer Xerox had a mechanism to assign IP it had no use for back to inventor employees).
28/ If, for example, you invent a better mousetrap while employed at Mousetraps, Inc., you may lose some pay for sunk effort (it looks like a day’s work, actually took a week) but get a patent bonus, and maybe rights to non-mousetrapping applications. But these are side issues about how to cash out the discovery.
29/ The deeper point here is developing a bonus/spillover ‘lucky’ mentality and actively looking for elegance and discovery in everything you do. Even at the risk of lower immediate reward due to the maker’s dilemma. In the short run, in specific gigs, you may lose rewards.
30/ But in the long run, you’ll develop a reputation for being unreasonably lucky and inspired. A “done, and gets things smart” genius rather than a mere lean, mean worker-bee machine.
31/ A bonus mentality is also the trick behind “talent hits what others can’t hit, genius hits what others can’t see.” Can you guess why?
32/ It has to do with “obviousness” in where you’ve come from versus in where you could go. If an “obvious” insight at line 800/week 3 of coding V1 of a program leads you to actually deliver a 20-line V2 that takes 1 hour to write, your client will feel cheated if you charge for more than a day.
33/ But the 3 weeks are not wasted; they’ve refactored your perception! The client is the beneficiary of a more cheaply hammered nail, but you’re the one with the new hammer, capable of seeing the nails others can’t see. They will... after you point them out, and hammer a few bonus nails to teach them.
34/ Those bonus nails are how you capture more value from your insights AND ensure everybody groks the potential so it’s turned into spillover societal value. That is the big prize. So don’t let the maker’s dilemma in creative work inhibit your openness to discovery.
35/ The algorithmic bonus rescopes what can and should be attempted, unlike an artificial bonus, which is basically the uncertain but predictable value of an efficiency gain. The algorithmic bonus uncovers unseen targets by hitting them, resetting future targets.
36/ An algorithmic bonus couples effort to reward in an intrinsic way and forces you to revalue what you're doing at the level of fundamentals, with the benefit of hindsight. You're not merely taking (say) a cash or time savings to the boss and expecting a percentage of savings.
37/ An algorithmic bonus changes the goal based on what you've learned along the way. It is a way of realizing the value of the play and exploration inherent in information work (see my February newsletter with the how/why 2x2 for more insight into this).
38/ This process of "fat" exploration and play leading to discoveries and algorithmic bonuses is not just creative fun. It is the engine of organic innovation, and the main job left for humans in a world where machines can take over jobs the moment we can define them with reasonable clarity.
39/ In the future, all technological progress will increasingly depend on people with an algorithmic bonus mindset. People whose motivational patterns are wired to the explicit reward function will either be automated out of existence, or turn into problems for others.
40/ The industrial age paid lip service to these ideas with concepts like "continuous innovation," "kaizen," and "learning to learn," but they naturally belong in the world of programming and algorithms, where they are the essence of work rather than an "innovation" epiphenomenon of manual labor.
41/ In summary, if you're not constantly rethinking the target, you're not the archer; you're the arrow. AIs hit targets humans can't hit, but humans hit targets AIs can't see. Yet.
Sidebar for New Readers
I think I'm overdue for one of my occasional re-introduce-myself sidebars for the benefit of new readers. Now is a good time, since we seem to have had one of our occasional mysterious sudden influxes of new subscribers. About 136 new readers added since last week, putting us over 5700 now. So those of you who decide to stick around... welcome aboard. Those of you who don't, I hereby break up with you first.
You can find the archives here, and the link is always at the bottom of every mailing. There have been over 90 issues so far, and I've forgotten what I wrote in half of them, so you may find I occasionally repeat or contradict myself.
As you may or may not know, this is a weekly newsletter associated with the Breaking Smart site, where I publish in-depth seasonal essay collections on technology and society. Season 1, published in 2015, was on software eating the world and is available in English, French, and German. Season 2 will hopefully be out this year.
This weekly newsletter features more off-the-cuff thoughts in tweetstorm, short essay, or notebook entry form. But it's primarily an excuse for me to practice my iPad+Pencil drawing skills, so make sure you set your email to 'display images'.
You can also find my writings on other topics (and lots of other writers) on the blog I founded and edit, ribbonfarm, and my shitposting on twitter at @vgr. Stalkery types can find out more about my shady past on my main biography site, venkateshrao.com.
Welcome aboard!
Copyright © 2018 Ribbonfarm Consulting, LLC, All rights reserved.