Feed aggregator

Test Automation Patterns: Issues and Solutions

Agile Connection -

Automating system level test execution can result in many problems. It is surprising to find that many people encounter the same problems yet are unaware of common solutions that worked well for others. These problem/solution pairs are called “patterns.” Seretta Gamba recognized the...

Lightning Strikes the Keynotes: STARWEST 2014

Agile Connection -

Throughout the years, Lightning Talks have been a popular part of the STAR conferences. If you’re not familiar with the concept, Lightning Talks consist of a series of five-minute talks by different speakers within one presentation period. Lightning Talks are the...

The Rookie vs. the Veteran: How to Take Agile to the Next Level

Rally Software - Agile Blog -

How does a 120-year-old insurance company get more value out of its agile transformation in 2 years than a high-tech company that’s been practicing agile for 14 years? Well, it has something to do with bad habits that form when organizations don’t scale agile beyond the team level. Or they coordinate work to include the business and program management roles but don’t focus on best practices and continuous improvement to maintain results.

Here are some common traps organizations can fall into around team-level agile:

The Easy Road

It’s human behavior to take the path of least resistance. In the context of agile, I’ve seen teams and delivery groups (even those that religiously do retrospectives) take the easy route to tackle a problem rather than take on time-consuming — and sometimes contentious — changes to improve how they work. This ultimately leads to technical debt, and worse, unhealthy agile practices.

Developer-only Agile

The whole premise of agile is to connect the business and development to deliver value. Even in companies where strong development teams are killing it with agile, it’s pretty common for those teams to exclude outsiders. When that happens, teams start working in isolation, losing sight of what other teams and the business need, when they need it. Responsiveness, predictability and value delivered quickly become disconnected from market windows and what customers want. Organizations trying to retain valuable programming talent do the same thing — make decisions that keep developers happy instead of thinking about what’s best for their customers and the company.

Managing the Matrix

Enterprises experimenting with agile often try it within existing organizational structures. While agile teams can exist that way for a while, more often than not they end up isolated and can’t consistently deliver the value that the company needs to win in the market.

Developer Musical Chairs

Sustaining dedicated teams over time is key to agile success. When you start moving developers around to solve various problems that pop up, you create an extra learning curve, lower capacity and, sometimes without intending to, make decisions that can derail the features you’re trying to deliver.

Unmanaged Chaos

Companies that start to slack off on disciplined agile practices (like Kanban and Scrum) end up with highly reactive environments. This creates hidden work, high levels of work in process (WiP), lack of focus and even purposeful focus away from the company’s vision. Teams feel helpless and frustrated because they’re constantly playing defense.

Where Do You Start?

Sure, these problems can be overwhelming and prompt organizations to start questioning their investments in agile. But you can get back on track just by getting back to the basics.

Get back to basics. Re-establish great team-level practices (proper Scrum and Kanban), limit WiP and use metrics to help teams stay focused. Create value streams that follow the work, maintain dedicated teams and build strong delivery groups.

Adopt agile at scale. You won’t get to the next level of agile just by doing it at the team level — you need to launch a whole program right from the start. Organize for the work and find the courage to make your company structure part of the transformation. Create an agile center of excellence — and give it authority and funding — that guides the practices and evolution of your organization’s agile teams, delivery groups and people.

Include product and program management. Understand how to build a good agile roadmap and collaborate with your customers. Talk to them not only about what you’re currently delivering but about your future plans — and smooth the flow of work to teams so they’re working on the right things at the right time.

Invest in continuous improvement. Establish a culture of continuous improvement and support it with a mix of qualitative and quantitative measurements. Strategize collaboratively with big room planning that includes everyone in the delivery group. Watch how our customer, Seagate, uses big room planning to speed delivery, save costs and become a predictable engine for the business.

Achieve 4x Improvement

Achieving the true promise of agile (4x improvement) comes by connecting the whole system (not just teams) and making sure you’re always transforming and fine tuning to guide your agile journey.

Remember those two companies I mentioned? By launching its agile transformation at scale from the get-go, the insurance company (the agile rookie), Physicians Mutual, saw a 50-percent increase in major release frequency and the delivery of hundreds of items during the course of just one year. Read the case study. I was directly involved in helping the high-tech agile veteran (Rally) get back to basics to achieve 2x feature output and deliver 100 percent on our roadmap commitments.

If you want to learn more about how to effectively scale agile in your organization, I’m hosting a “Next Level Agile” webinar on August 19 at 11 a.m. MST in the U.S. and 2 p.m. BST in the UK. Register here for the North America webinar and here for the EMEA webinar.

Ryan Polk

Three Leadership “Musts” for DevOps

VersionOne Agile Management Blog -

A middle-of-the-night phone call is never a good thing – especially when the director of technology operations is on the other end. It was 2:00 a.m. in the summer of 2003 when I was abruptly awakened by my phone’s vibration.

My nightmare started as the director of technology operations reported that the system was down with no resolution in sight.

A company system outage is comparable to cutting off blood flow to the brain. When the system is down, there’s no cash and the business starts to die. No matter the size or stature of a company, technology leaders constantly carry the fear that even the smallest system outage could seriously damage their work. While this fear is hidden deep inside the psyche, it’s a reality that all tech leaders learn to live with.

My system outage was no different from eBay’s in the summer of 1999. The eBay auction site suffered from a series of system outages – the longest outage lasting 22 hours.

That outage cost eBay $5 million of transaction revenue. This $5 million may sound like a lot, but, in reality, it was nothing compared to the $4 billion drop in the company’s market value as the result of the outage.

WHAT ABOUT NOW?

Almost two decades later, technology experts are still experiencing major complications in their systems.

In July 2015 alone, outages and system failures affected the New York Stock Exchange, United Airlines, the Department of State’s visa system, Apple’s iStore, and – most notably – the Royal Bank of Scotland’s IT systems, where a half-million financial transactions vanished due to an unknown error.

They say “time heals all wounds,” but system outages may be the exception to this rule, as effects can be severe.

“For the Fortune 1000, the average total cost of unplanned application downtime per year is $1.25 billion to $2.5 billion,” says Stephen Elliot, IDC Analyst. “The average cost of a critical application failure per hour is $500,000 to $1 million.”

We are still not immune to these outages, and we must take great care to avoid them or risk losing time, money, and business.

WHAT IS HAPPENING?

Today’s systems are growing exponentially more complicated. Rising demand, volumes of aging data, patchworks of software, and network infrastructure all add to a system’s complexity and complicate its deployments. IDC estimates that the average number of monthly deployments will double in two years.

To combat system failures, technology leaders must adjust to an era of instant consumption – the “have to have it now” era. The world is no longer satisfied with single massive system updates every six or 12 months. Rather, we need to “deploy on demand,” updating software several times per day with 100% resilience.

In other words, we need DevOps.

THE WAVE OF CHANGE IS HERE

Simply described, DevOps is the collaboration and communication between software developers and technology professionals in the IT value chain to deploy software to customers. Gene Kim, author of The Phoenix Project, refers to DevOps as “the outcome of applying Lean Principles to the IT value stream.”

To achieve greatness, DevOps demands leadership vision and involvement. It requires executive sponsorship so that operational and cultural norms can change. It’s likely that your company will need to embrace these changes to ensure long-term success.

DevOps is successful because it dramatically reduces a company’s operational risk by creating conditions that advance company culture, interactions, and tools.

Imagine a world where product, development, QA, infosec, and operations are orchestrating together to deliver business value at the fastest pace possible in an “IT value stream.” And fast execution isn’t the only benefit here – the process also has high predictability and low risk. This symphony of establishing a reliable flow across the organization – along with cultivating the right culture – is the foundation on which change can be made.

In May 2011, LinkedIn’s valuation doubled to $9 billion on its second day after its IPO. With the stock soaring and a flood of new users flocking to the professional social networking site, LinkedIn was Wall Street’s golden child. Kevin Scott, LinkedIn’s top engineer, didn’t feel as confident. Scott knew that the system and its engineers were being crushed by the company’s own technology infrastructure, which was inhibiting growth.

In a bold move, Scott launched Project InVersion, an initiative where all new feature development for LinkedIn stopped so that every engineer focused on rebuilding its core technology infrastructure. “You go public, have all the world looking at you, and then we tell management we’re not going to deliver anything new while all of engineering works on this project for the next two months,” Scott says. “It was a scary thing.” This work centered on LinkedIn’s ability to build out DevOps so that it could scale and accelerate while eliminating technical risks.

The project extended the company’s deployment capabilities so that it could deploy changes at a moment’s notice, at any time of day. Further, it helped support the growth of LinkedIn’s user base to over 364 million members and a market cap of $28 billion.

THE THREE LEADERSHIP MUST-HAVES

Gene Kim describes DevOps as a “philosophical movement.” And he’s right. As DevOps garners more attention, experts are deliberating its “best practices” and developing tools to support those practices.

To enable success, I have found there are three “musts” that leadership should have when launching a DevOps movement. These “musts” are based on the premise that DevOps requires disruptive leadership.

1. Executive Involvement

Leaders, including the CTO and the CEO, must work together to make DevOps a strategic priority. Just as soldiers, airplanes, satellites, and technology are strategic assets for the military, technology leaders need to utilize DevOps assets to achieve their goals. Leaders should engage with business counterparts when harnessing the strategic value of DevOps.

Successful DevOps transformations require executive participation and understanding. By unifying the technology value stream, DevOps becomes a unique strategic capability that enables faster innovation and faster time to market.

2. Organizational Design Focused on Agile Value Delivery

DevOps transformations are not simple. They are difficult, they require creativity, and they lead to a journey that not everyone in your company is prepared to take.

Value Driven Organizations

The best way to confront this challenge is to develop a healthy organizational design. Separate organizational silos split by domain may be traditional; however, they are no longer effective. Many organizations, particularly those using Agile, are finding success by building cross-functional teams. Each team creates work in segments of time, or “sprints,” and each sprint results in the team delivering a potentially shippable increment of work product. Moreover, place more emphasis on grouping teams to swarm on delivering shared objectives. This structure will have a powerful effect on your company’s ability to collaborate and build business value.

This approach places more emphasis on teamwork: the teams design, build, and test together. And, throughout the development process, these teams actively coordinate with technology operations, InfoSec, and others to ensure that their work can be deployed.

Craftsmanship & Automation

Great DevOps companies make thoughtful, deliberate decisions to encourage great engineering craftsmanship. This craftsmanship ensures software is built with practices that produce a high-quality product. The practices we follow should focus on receiving fast feedback on whether or not the code really works.

Today, practices like Test Driven Development (TDD) are used to create tests before the code is written. By writing the code after the tests are created, developers produce code that, by definition, is already tested before it’s finished, thus reducing errors and increasing quality.
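As a minimal sketch of this rhythm (the slugify function and its behavior are invented for illustration, not taken from the article), the test is written first and the implementation follows:

```python
# Test written first: it pins down the behavior before any implementation exists.
# At this point the test fails ("red") because slugify is not yet defined.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  DevOps  Rocks ") == "devops-rocks"

# Implementation written second: just enough code to make the test pass ("green").
def slugify(title):
    # Lowercase the title, split on any whitespace, and join with hyphens.
    return "-".join(title.lower().split())

test_slugify()  # all assertions pass
```

Because the test existed before the code, the function is verified the moment it is written.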

Automation is another key element of the product development flow. Once tests are automated, a developer can run them with a single click, and the system can check changes across thousands of developers’ new code in a fraction of the time manual tests would take.

3. Synchronized Product Planning and DevOps Planning

Several successful DevOps groups are also accelerating their delivery capabilities with support teams. Technology operations, infosec, architecture, and risk/compliance teams are often involved in product planning.

This results in a higher degree of coordination in the product development cycle. Aspects of security, scalability, and reliability are baked into the solution from the earliest stages of planning. Moreover, by tying release management practices together at the beginning, the organization’s ability to coordinate product delivery matures faster.

DevOps may seem like a lot of work, but technology leaders should consider it a smart business investment. Companies unwilling or unable to adapt will be left behind and trapped under the weight of their own antiquated practices. Those slow to react will not be able to compete due to limitations of deployment speed and resiliency. However, it’s the companies employing DevOps that will outmaneuver and outpace their competition, leaving others in the dust.

Stacey Louie is the CEO of Bratton & Company, a leading Agile Transformation consultancy based in Silicon Valley. As an Enterprise Agile Coach, he was instrumental in PayPal’s 400-team global agile transformation, as well as supporting other Fortune 500 companies including Cisco, Hewlett Packard, and eBay. He also held the position of division CTO/CIO at public companies including Verisk Analytics and Stewart Information Systems.

Welcome to Agile: A Developer’s Experience

Agile Connection -

In this article, a developer shares his personal experience with the transition from a waterfall environment to an agile one. He compares what it was like for him coding, learning, and communicating using each methodology, and he shares what it was like making the change to agile—and why he's never looking back.

Interview with Alistair 2015 on Software Requirements Specifications

Alistair Cockburn -

Short interview with me on user stories and use cases versus Software Requirements Specification

https://www.healthcareguys.com/2015/07/23/the-differences-between-user-stories-and-software-requirements-specification-srs-interview-with-dr-alistair-cockburn/

The Differences between User Stories and Software Requirements Specification (SRS) – Interview with Dr. Alistair Cockburn

Posted on July 23, 2015 in Education, Research
This is the fifth post in the series in which I have interviewed several Agile experts to reveal the differences between user stories and software requirements and their application in regulated systems (i.e. health IT systems). You can find the previous post in this series here.

Today’s interview is with Dr. Alistair Cockburn, one of the original creators of the Agile Software Development movement and co-author of “The Agile Manifesto.” He was voted as one of the “All-Time Top 150 i-Technology Heroes” for his pioneering work in the software field. Besides being an internationally renowned IT strategist and author of the Jolt award-winning books “Agile Software Development” and “Writing Effective Use Cases,” he is an expert on agile development, use cases, process design, project management, and object-oriented design.

In 2003 he created the Agile Development Conference; in 2005 he co-founded the Agile Project Leadership Network; and in 2010 he co-founded the International Consortium for Agile. Many of his articles, talks, poems, and blogs are online at http://alistair.cockburn.us.

Do you think that “user story” is just a fancy name for SRS?

No, but close.

How do you compare a user story with SRS?

A story must be end-user visible, something that an end user declares is valuable to him/her, and implementable in one sprint. SRS does not have those characteristics.

Do you think that user stories replace SRS?

Yes.

Which of the two do you prefer working with?

Neither. Use cases + user stories works well.

Which of the two methods do you recommend using for regulated systems (i.e., health IT systems, medical device software)?

I do not recommend user stories for regulated systems. I do not recommend SRS ever.

Interview with Alistair 2012 Audio Interview for Agile Revolution in Australia Nov 2012

Alistair Cockburn -

Craig Smith, Renee Troughton and Tony Ponton in Brisbane (Australia) caught hold of me and tied me down, plied me with wine while they joked about and asked questions… the result on the http://www.theagilerevolution.com/ site is their podcast #49, about 45 minutes of fun interview (missing only the Oath of Non-Allegiance (discussion: Re: Oath of Non-Allegiance), which they say they’ll get in the next interview).

Check out http://www.theagilerevolution.com/episode-49

Some timing marks:

  • Advanced Agile at 5:30
  • 1st of Tony’s funny faces about 9:45 – 10:00
  • Agile manifesto “x OVER y” 19:30
  • 2nd Tony funny face about 22:20
  • Organizational transformation about 26:00+
  • Renee speechless for a few seconds about 27:00 – 27:15
  • Baby crying “mummy” at 20:00
  • IC Agile at 42:00
  • can’t find tony’s second funny face where he nearly spills the wine, let me know if you find the marker.

How to Collaborate in DevOps Software Development

VersionOne Agile Management Blog -

DevOps Software Development is new to many organizations and figuring out how to best collaborate can be challenging. One of the recurring roadblocks experienced by the organizations we serve revolves around collaboration. What are some of the difficulties they face and how can DevOps address these to help deliver great software and build systems that scale and last?

At Blue Agility, we have been leading large-scale agile transformations to help our clients align business and IT, achieve faster time-to-market, and remain competitive in the current marketplace.

DevOps Software Development

Software development is an intense collaborative process where success depends on the ability to create, share and integrate information at a very rapid pace. With globalization comes a growing need to foster highly productive software development teams that can operate successfully in this global market. Distance creates an additional challenge to development processes, as fewer opportunities exist for rich interaction and direct communication occurs less frequently.

Virtual team collaboration is the collaboration of teams that are not located in the same physical location. These teams could be on-site, near-shore, offshore, or a combination of the three.

Whether dealing with teams collaborating in the same location or virtual teams across multiple locations, collaboration is key to a successful DevOps transformation.

DevOps is focused on improving the principles of collaboration including:

  • Voice of the Customer
  • Just in Time Requirements
  • Refinement
  • Social Interaction
  • Transparency
  • Demonstration
  • Fast Feedback

How to Collaborate

So how is collaboration best optimized within DevOps?

The key is to enable effective collaboration at the three following layers:

Team Collaboration: DevOps builds on the concept of small teams working together to achieve “great things.”

Team of Teams Collaboration: A group of teams working in cadence and synchronizing often.

Intent/Idea Collaboration: Alignment to ideas/concepts that have been identified, analyzed and approved for delivery.
With the challenges of collaboration, tooling to support the development teams becomes critical. Whichever tool is selected, it must deliver transparent and effective collaboration at all three layers to be truly successful across the entire delivery life cycle.

Last Word

Ultimately, the improved collaboration afforded by DevOps Software Development leads to better reliability, more time to focus on the core business, faster time to market, and of course, happier clients.

Constructive deconstruction of subtyping

Alistair Cockburn -

Abstract

Reflection is the programmer’s tool for deconstruction. “Deconstruction”, a modern technique for literary criticism, involves using any available information about a text to help interpret it, even what is not said. Similarly, a programmer using reflection takes advantage of information about the argument passed in, rather than simply using it directly as a value. This poses a threat to type checking and what we typically think of as subtyping.

This paper shows that subtyping cannot be expressed as a binary relation between two types, subtype(S, T) (more accurately, Γ |- S<:T), but must be expressed with respect to the range of usage permitted in the operating environment, i.e., subtype(S, T, E_P) (or E_P |- S <: T). Making the environment a first-class entity lets us resolve certain conflicts in the literature. This paper identifies four categories of usage and shows how they resolve some conflicts in subtyping discussions.

The paper in brief

The purpose of this paper is to tie up a loose end. The loose end is that literature on subtyping discusses assertions, “S is a subtype of T” (written S<:T), as if such an assertion can be pronounced true or false for all programs. Most authors assume that, in principle, such an assertion is possible. Formal literature on subtyping carries a symbol, Γ, to denote the “environment” of the typing assertion (so, officially, Γ |- S<:T). But Γ is defined only over the variables of the environment. The loose end is that Γ should be defined also with respect to the power of the environment, the sorts of activities that can take place against the type, possibly beyond those defined in the types themselves (e.g., reflective operations). I use the letter E to refer to this more general sense of environment.

Without tying up the loose end, we find seemingly conflicting or incorrect assertions in the literature. Reexamining the assertions in the context of the operating environment often sorts out the contradictions. Different authors, offering opposite pronouncements of what is a valid subtype, are merely speaking to different environments.

The simplest environment is one in which only the static representation of the typed item is important, not the operations. The next two differ by whether one or more than one client might be invoking operations on the item at one time. The fourth is the reflective environment, an environment that ruins standard type systems.
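As a sketch of how a reflective environment breaks substitution (Python used for illustration; the class names echo the running example, but the code is mine, not the paper’s):

```python
import math

class Ellipse:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor

class Circle(Ellipse):
    def __init__(self, radius):
        super().__init__(radius, radius)

def area(e):
    # A conventional client: uses only the declared operations.
    return math.pi * (e.major / 2) * (e.minor / 2)

def reflective_client(e):
    # A reflective client: inspects the argument rather than simply using it.
    return type(e).__name__

# Substituting a Circle for an equal Ellipse is invisible to the conventional client...
assert area(Circle(2)) == area(Ellipse(2, 2))
# ...but the reflective client can tell the two apart.
assert reflective_client(Circle(2)) != reflective_client(Ellipse(2, 2))
```

The same argument satisfies every declared operation identically, yet the reflective client observes the substitution.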

The matter is merely a “loose end”, in the sense that most researchers into type systems understand the environment for which they are writing. It is a significant loose end, though. Many researchers really do believe that S<:T can be evaluated once, for all programs. Some programmers give up on the idea of subtyping, because they work in a more powerful environment than their typing system is equipped to handle. And it is difficult to sort out the conflicting messages in the literature.

In this paper, I use two running examples. The first is the old riddle, “Is a circle a subtype of an ellipse?” and the second is, “Is a working person a subtype of a person?” The answer I give is an aggressive form of “it depends”:

Those questions cannot be answered as phrased (this should not be a surprise to the reader). They cannot be answered when given the abstract definition of all items. In fact, they cannot be answered even when given concrete implementations in any mathematical or executable language (this should be a surprise to the reader). They cannot be answered except in the context of an interpreting environment, where the usage of the items is known.

What are typing and subtyping?

To get the discussion off the ground, we need to look at the different intents and definitions for typing and subtyping.

Typing

For typing, it suffices to scan the literature:

The primary goal of a type system is to ensure language safety by ruling out all untrapped errors in all program runs. [The] declared goal of a type system is usually to ensure good behavior of all programs, by distinguishing between well typed and ill typed programs. [Cardelli]

“A program variable can assume a range of values during the execution of a program. An upper bound of such a range is called a type of the variable.” [Cardelli]

Types are linguistic expressions defined thus: (i) each type-variable is a type (called an atom). (ii) if σ and τ are types, then (σ → τ ) is a type (called a composite type). [Hindley]

1. Atomic types T1, ..., Tn are types.
2. If U and V are types, then U x V and U -> V are types.
3. The only types (for the time being) are those obtained by ways 1 and 2. [Girard]

A type is the range of significance of a propositional function. [Bertrand Russell, from his doctrine of types, 1903 [Sambin]]

A type is well defined if we understand… what it means to be an object of that type. [Martin-Löf, from [Sambin]]

In the terms that will be developed in this paper, Hindley and Girard have restricted themselves to “function application” environments in their definitions. The generous and informal definitions of Cardelli and Martin-Löf will carry forward to the reflection-based examples discussed later.

Subtyping

There are two schools of thought as to the purpose of subtyping: substitutability, and safety.

Substitutability: My program behaves the same when I pass in an argument that is declared as a subtype. [programmer A]

Safety: My program, once type-checked, still will not issue any message that is not accepted by its recipient or misinterpret any set of bits, even if I substitute subtypes. [programmer B]

or

The main goal in safety preservation is ensuring that all objects in a system maintain consistent states [Lea]

These two schools are reflected in the technical literature:

What is wanted here is something like the following substitution property: if for every object o1:S there is o2:T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is substituted for o2, then S is a subtype of T. [Liskov]

An element of a type can be considered an element of any of its supertypes….if a term has type A, and A is a subtype of B, then the term also has type B. [Cardelli] (whose intent is still to detect ill-behavior.)

Different authors have different recommendations as to how to achieve these goals (not all authors state which goal they are after).

Class C is a subtype of a class D, ..., if there is a total map φ : methods(C) -> methods(D) and a data refinement relation R on c;d such that the theory Γ_D of D extends the translation of the theory Γ_C of C under φ and R. (...) Initializations and invariants may be strengthened, preconditions weakened and postconditions strengthened, all modulo the refinement relation R. Aliasing is a major problem in preventing modular reasoning about a class… there is no simple solution to this problem. [Lano]

The terms type, subtype and supertype, unqualified, refer to scalar types specifically… T and S are used generically to refer to a pair of types such that S is a subtype of T. Every value in S is also a value in T. Subtypes are subsets. A subtype has a subset of the values but a superset of the properties. Type inheritance implies (value) substitutability. [Date] (for consistency in this paper, I substitute S for his T’)

From these definitions, we get surprisingly different recommendations as to what actually counts as a valid subtype. Let us take a look at some of the conflicting statements.

The running examples

Is Circle a subtype of Ellipse?

Date says, “yes”, because, “every possible representation for ellipse is – necessarily albeit implicitly – a possible representation of circles, too.” [Date]

According to Lano’s definition, we cannot tell without looking at the actual definitions of Circle and Ellipse. So let us look at two definitions.

Let Ellipse1 support three functions: major, minor, and area, which return the length of the major axis, the length of the minor axis, and the area, respectively. Circle1 has the same operations, but minor and major always return the radius.

For Lano to evaluate the subtyping, we need to construct the data refinement relation R and the mapping of functions φ . In this case, they are trivially established, and so we conclude that Circle1 is a subtype of Ellipse1. According to theory, for every program P, we should be able to happily use an instance of Circle1 as an argument wherever Ellipse1 is declared.
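A sketch of this read-only case in Python (my rendering of Ellipse1/Circle1, not code from the paper): any client restricted to the three query operations behaves identically on both types.

```python
import math

class Ellipse1:
    def __init__(self, major, minor):
        self._major, self._minor = major, minor
    def major(self):
        return self._major
    def minor(self):
        return self._minor
    def area(self):
        return math.pi * (self._major / 2) * (self._minor / 2)

class Circle1(Ellipse1):
    def __init__(self, radius):
        # Both axes report the radius, as in the definition above.
        super().__init__(radius, radius)

# An Ellipse1 client that only queries works unchanged on a Circle1.
def describe(e):
    return (e.major(), e.minor(), round(e.area(), 2))

assert describe(Circle1(2)) == describe(Ellipse1(2, 2))
```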

It turns out that this is a wrong belief, as we shall see.

Let Ellipse2 support two more functions, major(x) and minor(x), which set the major and minor axes. Circle2 supports only major(x). The mapping φ cannot be established, quite appropriately, because a Circle2 cannot be asked to change its aspect ratio. Therefore, Circle2 is not a subtype of Ellipse2, according to Lano’s definition.
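The failure of the mapping φ can be seen concretely (again a Python sketch of Ellipse2/Circle2; the stretch client is invented): any implementation Circle2 chooses for the mutators either breaks its own invariant or surprises an Ellipse2 client.

```python
class Ellipse2:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def set_major(self, x):
        self.major = x
    def set_minor(self, x):
        self.minor = x

class Circle2(Ellipse2):
    def __init__(self, radius):
        super().__init__(radius, radius)
    def set_major(self, x):
        # A circle must keep major == minor, so both axes change together.
        self.major = self.minor = x
    # There is no honest set_minor: it would either break the circle
    # invariant or violate what Ellipse2 clients expect of set_minor.

def stretch(e):
    # An Ellipse2 client that assumes the axes vary independently.
    e.set_major(10)
    return e.minor  # expected to be unchanged

assert stretch(Ellipse2(4, 2)) == 2   # the ellipse behaves as specified
assert stretch(Circle2(2)) == 10      # the circle's aspect ratio betrays it
```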

It turns out that this also is a false belief, as we shall see.

What else?

Is ColoredCircle a subtype of Circle? Suppose ColoredCircle is just like a Circle, but has one more attribute, the color.

Is WorkingPerson a subtype of Person? Working person has additional attributes over Person.

Is 25SecondStoplight a subtype of UnspecifiedDurationStoplight? The UnspecifiedDurationStoplight has attributes for its timings, but no defined values for those attributes. The 25SecondStoplight has those values defined.

Is VariablePolygon a subtype of Polygonal? Suppose a type Polygonal is defined with a number of sides that stays constant over the lifetime of an instance. VariablePolygon is defined so that the number of sides can be changed over the lifetime.

Disagreements about subtypes

Now we encounter the surprises. Lano says that a WorkingPerson is a subtype of Person because it has “new attributes not logically linked to existing attributes”, and of the 25SecondStoplight, “the resulting class is a subtype” of the UnspecifiedDurationStoplight.

But Date disagrees. He considers the code fragment:

Circle c; ColoredCircle cc; c := cc;

and notes that suddenly the variable c has lost some information, the fact that it had a color. After lengthy consideration he writes the surprising sentence, “we find the idea of colored_circle being a subtype of circle somewhat suspect anyway, because it was not defined through specialization by constraint”. He makes the assertion:

...if S is a subtype of T, then subtyping should always be via S[pecialization] by C[onstraint]. ... a value of type T is specialized to type S precisely if it satisfies the constraint P’. Thus we see S[pecialization] by C[onstraint] as the only conceptually valid means of defining shapes. And we reject examples like the one involving COLORED_CIRCLE as a subtype of CIRCLE… [Date] (again I use S where he uses T’; bold type is his)

I shall put Date’s comments into perspective shortly. For now, let us shift to the second surprise, the idea that VariablePolygon might not be a subtype of Polygonal. We can provide the mapping required by Lano, so what could be the problem?

Lano considers “the aliasing problem”, the situation in which multiple client objects are using the same Polygonal object, and finds

...a client expecting an object ob:Polygonal to satisfy its specs could observe state changes to ob which were not explicable on the basis of the specification, if ob was shared with another client who applied the add_edge operation.

The VariablePolygon situation above jeopardizes the safety goal of subtyping as well as substitutability.

I resolve these puzzles shortly. The point for the moment is to illustrate that there is disagreement and some confusion in what seemed to be a simple discussion. The next section brings reflective programming and deconstructionism into the picture, taking that confusion to its logical extreme.

What’s deconstruction got to do with it?

Deconstruction is to literature what reflection is to programming (using deconstruction to connote a set of related techniques of modern literary criticism). I borrow from the history of deconstructionism to peek ahead at possible futures for reflective programming and type theory, and to construct a safety valve for subtyping.

Jorge Borges presages deconstruction in his fiction “Pierre Menard, Author of the Quixote”. In this story, the contemporary author Pierre Menard sets himself the task of writing a story that is letter-by-letter identical to Cervantes’ Don Quixote, but created freshly and totally out of Menard’s personal, artistic experience in 20th-century France instead of Cervantes’ in the Spain of centuries earlier. Borges considers what it means that a piece is thought to be written by someone else, and concludes with the prescient words:

Menard… has enriched … the halting and rudimentary art of reading… This technique, whose applications are infinite, prompts us to go through the Odyssey as if it were posterior to the Aeneid… To attribute Imitatio Christi to Louis Ferdinand Celine or to James Joyce, is this not a sufficient renovation of its tenuous spiritual indications?

Deconstruction is indeed a bottomless technique. The reader is allowed to consider any information at all about the piece, including surmises as to what the author intentionally or accidentally left out. It is always possible to invent something new to relate the piece to.

Similarly, a programmer using reflection can examine any and all information about the arguments being passed in to a function: the name of the class, the language, date of origin, naming style, etc., things that are not normally considered part of formal type definition and subtyping. The client can fiddle with the textual description of the type or class definition itself. This programming trend is happening at the same time that type theorists are tightening the rules of type compliance.

Programmers and type theorists are facing what Bloom and Tejera wrote about literature, in 1975 and 1995:

Literary meaning tends to become more undetermined even as literary language becomes more over-determined. ...there are… only relationships between texts. These relationships depend upon a critical act, a misreading or misprision, that one poet performs upon another and that does not differ in kind from the necessary critical acts performed by every strong reader upon every text he encounters. [Bloom]

But language as utterance… cannot be fully understood in separation from the human producers and users of it. [Tejera]

A subtype is not a subtype in a reflective environment

The programmer examining the name of the argument, or the definition of the type itself, bypasses the standard mathematics of type theory. If the code contains:

return ( argument.getClass().getName().equals("T") );

then nothing else can be a subtype of T, no matter what the Lano mappings are!

Let us consider Circle1 and Ellipse1. It was easy to provide Lano’s mappings, and so we assert

Circle1 is a subtype of Ellipse1, ( Γ |- Circle1 <: Ellipse1 )

We then encounter someone’s program, in which is written:

class Client {
    public boolean useable( Ellipse1 ellipse ) {
        return ( ellipse.getClass().getName().equals("Ellipse1") );
    }
}

Our Circle1, proven to be a legitimate subtype of Ellipse1, cannot be substituted in the above code. In fact, nothing except for Ellipse1 can be substituted in the above code. To put it in the form recommended by this paper,

in the environment of the client code, Ellipse has no subtypes.

If the above code appears in program P, then,

( ¬ [E]P |- Circle1 <: Ellipse1 ).

The type definition has been “deconstructed”. The act of deconstructing it violated the assumptions of usage of the original subtyping assertion.
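The rejection can be reproduced in miniature. In this hedged sketch the three Ellipse1 functions are elided, since only the class names matter to the reflective test:

```java
class Ellipse1 { /* major, minor, area elided */ }

class Circle1 extends Ellipse1 { /* both axes return the radius */ }

class Client {
    public boolean useable( Ellipse1 ellipse ) {
        // accepts only the exact class, never a subtype
        return ellipse.getClass().getName().equals("Ellipse1");
    }
}
```

A call such as useable(new Circle1()) returns false: the proven subtype is turned away at run time.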

From a stylistic perspective, we might like to argue that the above code shows “poor coding style”. However, from a mathematical perspective, it is a legal code fragment and must be taken into account by type theory. Further, it is not only legal, but the kind of code that quite a large number of programmers will write. Further still, there are circumstances where similar code is not only good style, but appropriate, natural, and desirable.

A growing number of systems are written to take advantage of reflection, often during code development, test, and migration. On one reported project, every class added to the system within the project is given a significant prefix. The migration harness code checks each class’s prefix to decide how to upgrade the class.

In a second example, the reflective Java code below searches all classes for all functions that start with the prefix “test”. They are automatically collected into a test suite, simplifying the life of the programmer.

private boolean isTestMethod( Method m ) {
    String name = m.getName();
    Class[] parameters = m.getParameterTypes();
    Class returnType = m.getReturnType();
    return
        parameters.length == 0 &&
        name.startsWith("test") &&
        returnType.equals(Void.TYPE) &&
        Modifier.isPublic(m.getModifiers());
}
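A predicate like this is typically driven by a loop over getMethods(). The self-contained sketch below (the class names SampleTests and SuiteBuilder are invented for illustration) shows the whole collection step end to end:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Hypothetical class under inspection: only public, zero-argument, void
// methods whose names start with "test" should be collected.
class SampleTests {
    public void testAddition() { }
    public void helperMethod() { }            // excluded: no "test" prefix
    public int testWithReturn() { return 0; } // excluded: non-void return type
}

class SuiteBuilder {
    static boolean isTestMethod(Method m) {
        return m.getParameterTypes().length == 0
            && m.getName().startsWith("test")
            && m.getReturnType().equals(Void.TYPE)
            && Modifier.isPublic(m.getModifiers());
    }

    // Walk every public method of the class and keep the qualifying names.
    static List<String> collectTestNames(Class<?> c) {
        List<String> names = new ArrayList<>();
        for (Method m : c.getMethods())
            if (isTestMethod(m)) names.add(m.getName());
        return names;
    }
}
```

For SampleTests, collectTestNames keeps only testAddition; the two decoys and the inherited Object methods are all filtered out by the predicate.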

This test-suite generation code illustrates a more refined use of deconstruction, one that “refers to that which it includes”, and, by consequence, also “refers to that which it excludes”, a practice of literary deconstruction:

As an interpretive practice, deconstructive readings follow the motifs or metaphors that govern a text, seeing where they lead and what these motifs exclude, or, as the jargon puts it, “marginalize” or render unimportant in favor of the overt thesis of the text. A deconstructive strategy then reads that excluded component back into the text, demonstrating that the language of the text, practice, or institution must simultaneously refer to that which it excludes. [Nordquist]

Some in the literary world found deconstruction left them with an empty, nihilistic feeling. As literary critics, they saw a loss of control:

For deconstructors in philosophy, the multiple meanings are in part generated by the interpreter who reads the text; for the literary critic, the ambiguity is in the text itself and its language, grammar, and rhetorical mechanisms. To the literary critic, the reader/subject is no longer in control of the text/object… [Nordquist] (my italics)

I do not wish to see type theory annihilated, only rebalanced. I take the side of the philosopher in the above quote, not that of the literary critic. The ambiguity of types is not in the text itself; it is generated by the interpreter who reads the text. The subtyping relation is established by the “interpreter who reads the text”, the programmer (or program, depending on how one likes to talk about it). Still, it is not really the case that the type definition has no meaning at all.

For now, it is sufficient to recognize that it is neither practical nor desirable to define “subtyping” as a static relation between two type definitions.

Reconstruction

[Deconstruction] is an effort to reverse the ratio of writer to reader. To make the reader larger than the writer. I could say I find this disturbing… Those we call Reconstructionists are urged on by a hard sense of reality asserting itself. [Nason]

If we are not careful, we could end up with years of arguments about whether type definitions actually mean anything. Let’s fast forward the discussions about deconstructionism to see where they ended up. The answer is that one must eventually find some directly intended meaning of the writing.

We do not speak of poems as being… right or wrong. A poem is either weak and forgettable, or strong and so memorable. [Bloom’s Kabbalah and Criticism, quoted in [Nason]]

[Bloom] has brought back value into his assessment of texts, simply replacing such things as “good” and “true” with “weak” and “strong”. In so doing he reintroduces the notions of “good” and “bad” and “true” and “false” into criticism [Nason]

In type definitions, the writing does have intrinsic meaning, but the meaning is not absolute, it varies with the environment of use. It is not sufficient – it is not possible! – to make the blanket assertion: S is a subtype of T. Formally, we cannot hope to prove E |- S<:T, that S can be considered a subtype of T in all environments E, for interesting S and T. The properties of the execution environment matter too much.

Is Circle1 a subtype of Ellipse1? Yes, as long as the client only invokes the declared Ellipse1 functions. No, if the environment is reflective, for there the subtyping relation is not guaranteed to hold.

The authors of the Java testing code are counting on their clientele to be at least as smart as the average compiler, to understand and take advantage of their use of function prefixing. They made the prefix part of the (informal) type declaration. Their user community is able to understand the convention, is happy with the labor savings, and they write their test function on the assumption that the code will pick up the ‘test’ prefix.

If reflection breaks subtyping in certain ways, aliasing breaks it in other ways, as we have seen. So the presumed environment of usage is a first-class element of the subtyping relation. It is necessary to assert: S is a subtype of T in environments E. Formally, that

[E]P |- S<:T

(“S can be considered a subtype of T in all environments E that use operational properties P”).

With this in mind, we consider Lano’s informal definition of a subtype as “a contract stating to any client of the class what behavior can be expected from any instance of any of the new subtypes of this class” [Lano]. This description fits the Java test code example, if we take the programmer as the client.

Four usage envelopes

Let us consider four categories of execution:

R – Static representation only, clients’ access to functionality is not considered.

S – Single-user, user clients access declared functions only.

M – Multi-user, user clients access declared functions only.

F – Reflective usage, user client can access the type definition.
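As a toy encoding (entirely illustrative; the entries simply record conclusions argued in this paper, they are not computed by any checker), the envelope-indexed subtyping judgment might be pictured as a three-place predicate:

```java
// Illustrative only: the four usage envelopes, and a lookup table that
// records this paper's subtyping conclusions for its running examples.
enum Envelope { R, S, M, F }

class Judgments {
    static boolean subtype(String s, String t, Envelope e) {
        if (s.equals("Circle2") && t.equals("Ellipse2"))
            return e == Envelope.R;   // value substitutability only
        if (s.equals("Circle1") && t.equals("Ellipse1"))
            return e != Envelope.F;   // fails once reflection is allowed
        if (s.equals("VariablePolygon") && t.equals("Polygonal"))
            return e == Envelope.S;   // single client only; aliasing breaks M
        return false;
    }
}
```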

With those four categories in hand, we can clean up the discussions of Colored_circle, Circle and the polygons.

Date and Darwen’s book assumes R usage most of the time. They are primarily concerned with database requirements for elements of the type hierarchy. Under the R conditions, they need not consider what behavior the types might be given. Therefore, value substitutability is sufficient, and behavioral substitutability is not needed. In the R environment, it is even true that Circle2 is a subtype of Ellipse2.

[E]R |- Circle2 <: Ellipse2, or, subtype( Circle2, Ellipse2, r )

Date and Darwen use Colored_circle as a valid subtype of Circle for most of the book. However, in Chapter 14 and Appendix C, they start considering the S category of environments (without explicitly differentiating the environments), and notice that “a certain update operator might be defined for type Rectangle but not for type Square.” Under the new conditions, they announce

It therefore does not seem reasonable to regard type CIRCLE as being a subtype of type ELLIPSE!

What they mean to say is that

¬ [E]S |- Circle2 <: Ellipse2 ( not (subtype( Circle2, Ellipse2, s ) ))

Discussing Zdonick and Maier’s wish for substitutability, static type checking, mutability, specialization by constraints [Zdonick], they conclude:

inheritance applies to values, not variables.
Thus it seems to us that the key to the problem is to recognize : ...
* The logical difference between The Principle of Value Substitutability and The Principle of Variable Substitutability.

which is their way of separating R and S environments.

In the example of the variable subtype of Polygonal, Lano was interested in subtyping that preserves behavior. As he points out, his substitution mapping works as long as there is only one client using a type instance. A polygon with a variable number of sides might be substituted, and the client would be none the wiser, because it would not consider invoking any of the added functions of the variable-sided polygon. In other words,

[E]S |- VariablePolygon <: Polygonal

a VariablePolygon is a subtype of Polygonal in an S environment, but

¬ [E]M |- VariablePolygon <: Polygonal

a VariablePolygon is not a subtype of Polygonal in an M environment, because things get difficult with multiple client objects pointing to the same polygon. If one client thinks it has a polygon with constant number of sides, and the other knows of it as a variable-sided polygon, then there is trouble. The second client may change the number of sides and surprise the first. As he writes:

[A]liasing is a major problem in preventing modular reasoning about a class… there is no simple solution to this problem. [Lano]
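The hazard is easy to reproduce. In this sketch (the class name follows the paper; the Java rendering and method names are assumptions), two clients share one instance, and the first client’s “constant” side count changes under its feet:

```java
// A polygon whose side count may change over its lifetime.
class VariablePolygon {
    private int sides;

    VariablePolygon(int sides) { this.sides = sides; }

    int sides() { return sides; }
    void addEdge() { sides++; }  // the operation a Polygonal client never expects
}
```

With two aliases a and b of the same instance, a call b.addEdge() silently changes what a.sides() reports: the Polygonal client observes a state change not explicable from its own specification.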

In terms of this paper, Lano has a workable definition of subtyping for S environments, but not for M environments. Other researchers are working on subtyping in M environments, e.g., [LiskovWing] and [Clark]. They mention that they are addressing multi-user aliasing, but they do not recognize and separate the different subtyping claims that the different environments permit.

None of the given subtyping definitions work in a reflective environment, as we have seen. The surprise is that, in a reflective environment, we cannot even guarantee substitutability of Circle1 for Ellipse1:

¬ [E]F |- Circle1 <: Ellipse1 (not (subtype( Circle1, Ellipse1, f ) ))

Once the client has the ability to deconstruct the type, it is no longer the case that we can guarantee – without looking directly at the client code – that an instance of S can be substituted for T. In fact, it becomes hard to say just what a type is in the reflective environment.

In many cases, the programmer using reflection is careful to describe and use the reflection consistently, so that the other programmers know what to expect. As Martin-Löf wrote,

A type is well defined if we understand… what it means to be an object of that type.

In the Java testing framework, the other programmers understand that test methods are supposed to start with “test”, and write the programs that way. They understand what it means to be an object of that type, fitting Martin-Löf’s definition.

Epilog: Local subtypes

It may prove useful to consider “local subtypes”, in order to simplify aspects of program verification. A person checking a program may discover that S is not a subtype of T in the absolute sense previously sought, but is a subtype of T in all the uses in program P. They would prove the weaker, but sufficient assertion:

[E]P |- S <: T

(S passes as a subtype of T in programs P)

That proof might be considerably simpler, and of value in an industrial setting, where the program set P is likely to change its characteristics over time. It may therefore only be practical to carry out the proof if it can be done fully automatically, which in turn requires that the proof be simple.

Local subtyping can even invert intuitive subtyping (as if they are not being turned upside-down already!). Examining a Deque and a Stack as used in program P, we might demonstrate

[E]P |- Deque <: Stack

(This implementation of Deque passes as a subtype of this implementation of Stack in program P.)

which is the reverse of the usual.
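A concrete, hedged illustration: if program P’s only uses of its “stack” are push and pop, then java.util.ArrayDeque passes as a stack within P, even though Deque is not a subtype of Stack in Java’s class hierarchy (the class name LocalSubtypeDemo is invented):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class LocalSubtypeDemo {
    // Program P's entire usage envelope: push twice, pop once.
    static int lastInFirstOut() {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        return stack.pop();  // LIFO order, exactly what a Stack client relies on
    }
}
```

Within that narrow envelope the Deque is observationally indistinguishable from a Stack, which is all the weaker local assertion claims.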

Summary

Starting from a deconstructionist point of view, we saw that

  • Subtyping is not as simple a construct as it might have been thought to be. Trained authors contradict each other and even seem to contradict themselves.
  • The usually announced goal of subtype analysis, to guarantee substitutability without examining the client code, fails in the context of reflective programs. The reflective client can examine everything about the argument passed in to a function, including its name, and even the prefix on the name.

The disagreements between the authors and the damaged goal of substitutability can be resolved by enriching consideration of the environment, making it an intrinsic part of the subtyping relation. Four categories of environment were named:

  • R – the client will make no use of the behavior of the types involved. Data subtyping definitions apply here. For S and T, prove subtype( S, T, r ).
  • S – only one client uses an instance at any one time, and only accesses the declared functions. Standard functional subtyping definitions apply here. For S and T, prove subtype( S, T, s ).
  • M – multiple clients may use an instance at any moment, but only access functions declared for that type. The substitutability guarantee is as yet unsolved. For S and T, prove subtype( S, T, m ).
  • F – the client may use reflection. Substitutability of one type for another cannot be guaranteed, since the client may require the name of the instance to be a particular string. The assertion subtype( S, T, f ) cannot be proved in general.

Placing seemingly contradictory examples into the above categories resolved the contradictions.

Finally, it is possible to consider simplifying substitutability proofs by narrowing the environment of usage to a single program P, and proving only subtype( S, T, P ).

References

Abadi, M., Cardelli, L., A Theory of Objects, Springer, 1996.

Bloom, H., A Map of Misreading, Oxford U. Press, 1975.

Borges, J., “Pierre Menard, Author of the Quixote”, in Labyrinths, Modern Library, 1983.

Cardelli, L., “Type Systems”, Handbook of Computer Science and Engineering, chapter 103, CRC Press, 1997, also at
http://research.microsoft.com/Users/luca/Papers/TypeSystems.pdf

Dasenbrock, R., Redrawing the Lines: Analytic Philosophy, Deconstruction, and Literary Theory, U. of Minnesota Press, 1989.

Date, C., Darwen, H, Foundations for Object/Relational Databases, Addison-Wesley, 1998.

Girard, J-Y., Taylor, P., Lafont, Y., Proofs and Types, Cambridge University Press, 1989.

Hindley, J., Basic Simple Type Theory, Cambridge University Press, 1997.

Lano, K., Formal Object-Oriented Development, Springer, 1995.

Lea, D., Concurrent Programming in Java, Addison-Wesley, 1996.

Liskov, B., “Data Abstraction & Hierarchy”, OOPSLA ’87.

Liskov, B., Wing, J., “Family Values: A Behavioral Notion of Subtyping”, Technical Report CMU-CS-93-187, School of Computer Science, Carnegie Mellon University, 1993.

Liskov, B., Wing, J.. “A behavioral notion of subtyping”. ACM TOPLAS, 16(6):1811-1841, November 1994.

Mitchell, J., Foundations for Programming Languages, MIT Press, 1996.

Morningstar, C., “How To Deconstruct Almost Anything”, http://www.fudco.com/chip/deconstr.html

Nason, R., Boiled Grass and the Broth of Shoes, McFarland & Company, 1991.

Nordquist, J., Deconstructionism: A Bibliography, Reference and Research Services, 1992.

Sambin, G., Smith, J., eds., Twenty-Five Years of Constructive Type Theory, Oxford University Press, 1998.

Tejera, V., Literature, Criticism, and the Theory of Signs, John Benjamins Publishing, 1995.

Zdonick, S. and Maier, D., “Fundamentals of object-oriented databases”, in Readings in Object-Oriented Database Sys, Zdonick, S. and Maier, D., eds., 1990, Morgan Kaufmann.

DevOps Culture and the Informed Workspace

VersionOne Agile Management Blog -

While the DevOps culture has been heavily focused on what tools to use, little thought has been given to what type of workspace is needed. Ever since the early days of agile, the importance of an informative workspace has been known. Many of the practices around working together, pair programming and the Onsite Customer from Extreme Programming were meant to enable the rapid flow and visualization of how the team is doing. Other aspects, such as the Big Visual Chart, were included to keep the information flowing. We have made great progress in this category, but we still have more to do.

Now, fast forward to the DevOps culture. Much like the original agile movement, DevOps is an unusual change in the world of work that involves both cultural and technical shifts. It’s just not enough to have cool new tools like Puppet and Chef, or any of the other tools that make continuous delivery “a thing.” We need to think about how we plan our stories, and to include acceptance criteria that go beyond just “is it done?” all the way to “is it staying done?”

We often run into this as agile consultants. I have often gone to work with a client and their number one concern as we are going through the engagement is “ok, and what do we do on day one of the sprint when we don’t have you here coaching us?” Now, they are fine on their own, but part of the plan is to know what to do after the training wheels are off. Let’s look at that same idea in terms of DevOps Culture. Stories have a very limited life. Once the product owner has accepted the story, we tear it up and throw it away, metaphorically speaking. But that isn’t the end of the story. The software now needs to live and breathe in the big wide world. How do we do that? DevOps is of course the answer, but what exactly does that mean?

Workspaces in the DevOps Culture

As mentioned earlier in this article, the idea of an informed workspace is a valuable tool in our belt for moving deeper and wider into the DevOps culture. Think back to one of the biggest cultural changes called for in the early adoption of agile. Agile called for bringing QA into the room. We aren’t treating QA as a separate team, but part of the team. All of a sudden we are paying close attention to the number of tests that are passing. So now part of our Big Visual Chart is focused on the pass/fail rate of our tests, not just the status of the story itself. This shift took a lot of effort, and I think that if we are honest with ourselves it’s not done yet. But that is an article for another time. We want to take a deeper look at the keys to successfully effecting a further transformation to the DevOps culture. What aspects will really help us do more than just have lots of automated builds that we call done, with no thought to what happens after?

The first step is to think about the cultural changes required. What is it that we will need to change in our thinking in order to make DevOps more than just another buzzword at our shop? The first, and hardest, change is to stop thinking in terms of the “DevOps team”. The whole team is part of Dev and Ops. There just is no wall to throw “finished” product over anymore. It’s all about creating great and long lasting software. There are many steps that we have already taken to get there, but this is one of the biggest. So let’s take a look at the different activities that really make a DevOps culture thrive.

DevOps Activities

Of course, the first thing one thinks about when discussing DevOps is the activities that support continuous delivery. This means an even higher need for all tests to be automated. Unit tests are merely the start, followed closely by automating your acceptance tests. Having these tests running continuously throughout the day is basically the cost of entry into the DevOps culture. Without a strong continuous integration server, running tests all day and every day, we just can’t be sure that what we are releasing is of a high enough quality to stay healthy in the real world. After that, the art of continuous deployment becomes an additional challenge. Orchestration tools are vital to make sure the right bits get bundled with the other right bits, and then get put where they belong. And then, since we are all part of “keeping the lights on,” we need monitoring tools to help us visualize whether our software really is behaving properly. So yes, there is a definite technical aspect to DevOps.

That’s a lot of moving parts! We need to keep track of where we are. This leads to one of the cool parts of a true DevOps implementation. All those cool monitors that the Ops guys get with the fancy graphs and uptime charts? They come into the room with the Ops people. And we need to add to them. We are going to track story progress from idea to implementation, and then into the wild. My acceptance criteria are going to include things like “must have Nagios hooks” and “will use less than x% of CPU”. And now we have to live up to it. This means it is more important than ever to be able to visualize the entire flow. Our Big Visual Charts need to be able to show us not just how the current iteration is going. They must show us the state of the build server, the state of the various builds and where they are in any of the extended processes, such as UAT, etc. And, in the unlikely event of a failure anywhere along the line, or in post-production, we can follow a clear chain of events back to find the problem quickly.

Conclusion

So now we see that, while DevOps is primarily a people problem, there are a lot of technical aspects that enable a strong DevOps culture. The key to success is the union of the people and technical aspects, which in a way makes DevOps a cyborg. In order to balance these two aspects, and in order to keep ourselves from burning countless hours and countless brain cells chasing down all of the moving parts, we need to focus on information. The more information we can have at our fingertips, the more effective we will be. Each team will identify which information is the most meaningful to them, and how best to interpret it. You can bet that this will be in live charts rather than stale reports. Being able to orchestrate the entire flow of a story’s life, from inception to realization to retirement, will be much easier if we can visualize each step of the way. If this means our team room might start looking like the command center in WarGames, what’s so bad about that?

About the Author

Steve Ropa
CSM, CSPO, Innovation Games Facilitator, SA
Agile Coach and Product Consultant, VersionOne

Steve has more than 25 years of experience in software development and 15 years of experience working with agile methods. Steve is passionate about bridging the gap between the business and technology and nurturing the change in the nature of development. As an agile coach and VersionOne product trainer, Steve has supported clients across multiple industry verticals including: telecommunications, network security, entertainment and education. A frequent presenter at agile events, he is also a member of Agile Alliance and Scrum Alliance.

Know Your Customers: They Can Help You Write Better User Stories

Agile Connection -

Too many user stories begin, "As a user …" Who is your user? Or, more accurately, who are they? Improving your understanding of the types of customers who use your software lets you see multiple products where previously, there was only one—and identifying dedicated products will help you prioritize and accelerate delivery.

Five Tips for Improving Communication

VersionOne Agile Management Blog -

Communication is the key to solving problems and successfully collaborating, but many of us still have difficulty communicating with particular team members. Why?

Because the words we use mean different things to different people in different contexts.
Matt Badgley, an agile product consultant at VersionOne, recently gave a presentation at Agile Day Atlanta about communication techniques you can use to solve problems and improve team meetings.

VersionOne: Why is it important to focus on the words we use?

Matt: We all know that collaboration is the key to success. Ultimately, solving a problem is generally done by people talking to each other and working things out. Solving problems often happens inadvertently, through conversations.

So that’s why communication is key, and communication is made up, of course, of verbal and nonverbal cues. The same goes for the role of ScrumMaster. So, if you are in the role of product owner or ScrumMaster and you’re not good at facilitating communication, you are not going to be successful. So that’s why it’s really important.

When you actually talk about what words mean, you will find that certain words in certain organizations trigger emotions. They are bad words. They are basically four-letter words that are emotional for people. So you have to be aware of that. You will also find that there are some terms that mean one thing in one context and something totally different in another context. For example, epic is a word we use all the time in agile. And even the word project means different things, and it actually evokes different feelings in people.

VersionOne: In your presentation you shared some fun facts about communication – can you share those with us?

Matt: One of the most interesting statistics is that women speak roughly twenty thousand words per day on average, while men speak on average seven thousand words per day, and we all have around twenty-five thousand words in our active vocabulary.

Generally, we say between one hundred and one hundred seventy-five words per minute, although we can listen to up to eight hundred. That is why we can often eavesdrop on other people’s conversations and gain insight. Our conscious minds can only process about forty bits of information per second, which includes colors and things like that. However, our subconscious mind, which deals with our motor skills, processes around eleven million.

One last little fun fact: the word that has been shown through studies of the brain to be the most dangerous in the world is the word no – probably because we learn that word at a very early age and get our hands slapped. So if you say no in a conversation, that instantly turns the context of the conversation around, or changes the tone. This just goes to show that the actual words we use are often undervalued and can mean so much more.

VersionOne: What are some of the ways you suggest for people to solve that problem?

Matt: In my presentation I make five suggestions.

1) Don’t redefine the obvious.

For example, when talking about requirements, we often use the word feature or capability. The Scaled Agile Framework refers to requirements as a business epic or a feature epic. You’ll hear people throw out different terms simply for the sake of changing the term. So be very deliberate about whether or not you actually need to change a word.

2) Be deliberate and intentional.

If you make the decision to change a term, be deliberate and intentional about using it. For example, the Spotify model uses the word squad rather than team. Squad makes you think of the military, or of a small group that is a subset of a sports team. A team is a bigger composition, but a squad is a smaller and more intentional group of people. By deliberately redirecting people to use that term, you give it an underlying meaning that goes beyond the word team.

3) Be aware of biases around a word.

Bias is a preconceived feeling around certain words. A funny example is the word ScrumMaster. The term master carries some predefined bias that people bring into the room with them. It’s not always perceived the way it is meant, although ScrumMaster does actually mean the master of the Scrum process, the sensei. At the end of the day, that bias can be dangerous, so be aware of it.

4) Use domain language.

Use the words that the business already uses. This suggestion goes with number one: don’t redefine the obvious, but also don’t go out of your way to avoid the words that are standard in your industry. Accept and embrace some of the acronyms associated with the industry. For example, in the agile industry we use the terms product owner and sprint, so embrace those kinds of words.

5) Use visual elements when defining a glossary.

It may sound strange to create a visual glossary, but the idea comes from how we learned words as kids. You learned the word apple because you saw a picture of an apple. Defining ways in which people can not only read the word, but also visualize the word helps things stick.

Check out these posts to learn more about how you can improve your communication by focusing on what words mean.

The One Thing You Must Do this Morning to Achieve Your Goals

VersionOne Agile Management Blog -

We all have goals, yet many of us don’t meet them. Sometimes this is disappointing, but more often than not the bar just kind of moves down, time passes, and we become complacent.

Why?

Because it is pretty tough to stay focused for long periods of time on “Big Hairy Audacious Goals”.

So what can you do to meet your goals?

Just like it doesn’t work to wait till the end of a release to power through the requirements in a traditional waterfall, it doesn’t work to wait till the end of the day to power through your tasks. Instead of powering out of the day, focus on powering into your morning.

I once heard a CEO tell his company a story to motivate them. It was January and the year was just kicking off. He said “If we want to meet our annual goals, we have to meet our quarterly goals. If we want to meet our quarterly goals, then we have to meet our monthly goals. In order to meet our monthly goals we have to meet our weekly goals, and to meet our weekly goals we have to win each day.” He said to forget everything else and just win the day. He then went on to explain the origin of the mantra.

If you follow college football, you may be familiar with the Oregon Ducks’ mantra “Win the Day.” Most football coaches, at whatever level, preach focusing on just the next game and not looking past it. When Chip Kelly began coaching at Oregon, he preached that players should just “win the day,” because he believed the team needed to strive to win at every sprint, study session, and scrimmage in order to win the next game.

This story and mantra resonated with my agile roots. Agile is rooted in focusing on the day to achieve the end goal. Agile’s foundation is built around not overplanning to meet the final goal, but taking each day as it comes and responding accordingly. Stand-ups are all about teams winning the day.

Truthfully, as inspiring as this story was, I had forgotten about it until the other day, when I was working out at the gym and listening to a podcast in which an author was discussing their book The Miracle Morning. The book is about six practices people can do every morning to be more productive, successful, and happy. During the interview, the author started talking about the “win the day” mantra and how, if you want to win the day, you have to win the morning.

This snapped me out of my workout. On the surface, it would seem that the author simply took something successful and moved it one level down, but to me it was brilliant and once again aligned with my agile roots.

At work we are so often bombarded by firefighting that we fail to focus on our original goals or tasks. Agile, of course, employs story cards and stand-ups to combat these and other distractions. Most teams hold their stand-ups in the morning to kick the day off right.

Despite all of this, I had never thought about these practices being designed to help us win the morning so that we would win the day and ultimately win the end goal, but that is what they do. So, now I want to think about other things our teams can do first thing in the morning to ensure that we win the day – because to win the day, you have to win the morning.

Failure Modes of an Agile Transformation: Congruency

Rally Software - Agile Blog -

To this point, I’ve covered topics around failure in leadership and failure in workflow. It’s now time to dig a bit deeper into the question, “How does your organization show up?” That is, “What’s the overall sense of how people take ownership of their behavior in the transformation?” What healthy alignments emerge among the teams? When a leader chooses to ignore the importance of behaviors and relationships, I refer to this as a failure in congruency.

What do I mean by congruency? To take a geometric perspective, envision polygons whose corresponding sides and angles match one another. In this, they are congruent. They can be turned, flipped, or rotated and remain congruent. In fact, they need not be in the same location to have this attribute of congruency.

In our human world, congruency evidences itself through changes in behaviors across the team. Congruent team members move away from a “yes/no” “black/white” “us/them” mentality. Congruent teams abide by norms in which pathological (yes I said pathological) behaviors are not acceptable, because relationships matter. The pathologies of blaming or placating are replaced with an emphasis on equal stature and equal voice. In environments of congruency, each member is heard, understood, and valued.  

Of the 12 failure modes in Agile transformation, consider failures 7, 8, and 9 as failures in congruency.

7. Lack of a Transformation Product Manager

Imagine your transformation as a product. The product you create is not one that is “in” your business; that is, it is not software or a service you would sell. Rather, this product is one that works “on” your business. As such, it requires the discipline we’d ascribe to product development. You need to identify a “Transformation Product Manager” to be your scout leader in delivering a high-quality transformation. Have this person work in a tight relationship with the executive owner of the transformation. Together, they define the disciplined exploration and execution to deliver a world-class transformation. And, together, they are the models of congruency among all players in the transformation. They define the value of our various team polygons: different but equal.

Using language from Virginia Satir, think of congruent teams as “family” systems in which the whole matters. As you move through the bumps of your Agile transformation, your Transformation Product Manager helps the teams be attentive to the incongruent behaviors that can eat away at the sense of “us” and “among”. What behaviors are creating distrust or a lack of safety in your transformation? If you walk around your teams and notice tendencies toward blaming, placating, distracting, or being overly focused on process and structure, you are smack in the middle of incongruency. Ignore these harmful behaviors at your own risk.

All the process in the world is not going to move your Agile transformation into a healthy, sustainable state.

In a healthy world of Agile transformation, an intention around congruency emphasizes, “How can we better behave as a whole system to bring about the best results?” Here’s a start: ensure your Transformation Product Manager has the vision and empathy to recognize the destructive, incongruent behaviors. Next, ensure there is a non-negotiable value of trust — not just within a team, but across teams. Incongruency will evidence itself through “us/them” behaviors. Remove confusion about what we mean by product ownership. Incongruent Product Owners focus on “what’s in it for me” (WIIFM). There is a protectionist attitude about their particular backlog and the teams that work on them. Blaming becomes their primary communication mode.

Congruent transformations breed new types of Product Owners, who let go of this pathology around defending their product and product teams. Instead, they engage in conversations like, “Given the value that this product brings and its projected cost and value, what’s best for the overall portfolio?”

Investing in a congruent transformation opens critical dialogue around:

  • How the transformation impacts behaviors as well as processes and structures
  • Clarity of transformation goals in teams and across teams
  • The health of teams where behaviors such as blaming and placating, or a focus primarily on process and hierarchy are recognized as detrimental to the transformation
  • Intentional decisions about consistency of behavior not just standards and practices around process and metrics
  • Supporting the benefits of congruency over enforced behaviors

8. Failure to Create Fast Feedback

Did you know that Sir Isaac Newton never actually had an apple fall on his head? He was, however, the father of the fundamental law of cause and effect. Little did he know the impact his physics would have on software development in the 21st century. Through the Industrial Age and into the Age of Information, we’ve been clinging to cause and effect in how we build our organizations and how we expect them to work. Frederick Taylor and Henry Ford took advantage of this principle too. At its core, the assembly line uses cause and effect to create sequential, predictable, repeatable processes. Feedback loops on quality were less important, or non-existent, compared to how many items came off the line at any given point.

But where are we now? The nature of our knowledge work is inconsistent with the predictable, sequential work Newton helped foster. Yes, gravity still exists in a congruent world (well, at least in mine it does) but there’s much more going on. This is particularly true in an Agile transformation.

The forms we take, the behaviors we bring, the knowledge we carry all impact how we can stay in congruency. And that means we need to embrace regular learning as a core practice in our work.

How would you know that your Agile transformation remains largely informed by Newtonian physics? Think about these behaviors:

  • Clinging to a strict sense of predictability for when feature work will be completed
  • One centralized organization deciding all standards and rules for every team at the start of the transformation
  • Large-batch delivery of feature sets
  • Holding onto the belief that precision in analysis can resolve all risks in product delivery
  • Lack of experiments to test cause-and-effect assumptions about time, effort, and value
  • Blaming between business and development about delivery predictions and actual dates to support projected value
  • Blaming between development and testing about defects long after the features have been built

And finally, my favorite indicator of incongruent behavior:

  • Failure to get feedback through retrospectives, or the retrospectives that do occur perpetuate cause-and-effect fallacies and pathological behaviors (blaming, placating, etc.)

Fast feedback is the unspoken hero of congruency. We seek feedback on:

  • Guesses
  • Value
  • Behavior
  • Risk
  • Culture
  • Agile practices

In sum, healthy Agile transformations crave fast feedback on every aspect of how the transformation is progressing. For this to occur, ensure you deliver feedback both ad hoc and on a cadence, the latter being more formal and facilitated. Ad-hoc feedback reduces the waste of waiting for direction on very transactional decisions; cadenced retrospectives ensure regular inspect-and-adapt sessions across the organization.

9. Short-changing Collaboration and Facilitation

We humans, forming teams, are constantly playing with the balance of how to be a team member and how to remain an individual. How can I speak up, be valued, and not fear recrimination while at the same time working toward the good of everyone? This is where some sense of congruency can help.

Recall that our congruent polygons are not identical. Rather, they hold similar characteristics such that you could recognize them as belonging to the same “polygon tribe.”

Can you say this about your teams? When we inadequately support team interactions through facilitation and collaboration, we lose a sense of trust. Our teams end up fighting the gravitational pull of artificial harmony, low standards, and inattention to results, all due to a fundamental absence of trust. (See Patrick Lencioni’s The Five Dysfunctions of a Team to dig deeper into this pyramid of incongruent behaviors.)

Think about this: using Virginia Satir’s language, if we invite behaviors that gnaw away at the core fiber of a team, people move into modes of behaving that do not bring out their greatest insights. When this occurs, our dynamic is self-destructive. Why? Because we are only as smart as the least vocal person in a team.

How do we hold our insights dear and precious and necessary? We must seek a core team belief that collaboration makes us greater. And to collaborate, we recognize the value of objective facilitation. The work of the facilitator guides a team of individuals to decisions that integrate diverse perspectives in order to converge on actionable decisions. Good facilitators devote themselves to bringing out the best in the team. They do so by addressing incongruent behaviors and creating divergence and convergence processes, to safely buoy the team to sustainable decisions.

Be clear with yourself and your teams. Collaboration does not mean groupthink, despite what people may infer. Rather, we are explicit and intentional about when to bring voices together for the greater good of the team. These voices can disagree. And we need them all so that we uncover risks, opportunities, puzzles, and surprises. Armed with this knowledge, teams can bring this caution into their commitment and move forward with their work.

Jean Tabaka
