Data Science In the Balanced Team

This year I was fortunate enough to speak at PyCon Ireland 2016 in Dublin. This was a great event with lots of interesting Python-based talks and a full PyData track over the two days. The topic of my talk was something I’ve been thinking about a lot over the last few years: how data scientists can work with other disciplines.

Recently designers and product managers have begun working more closely with development teams, and in my opinion there are many lessons that data scientists can learn from this experience. In particular the concept of a “Balanced Team” appeals to me as a template for data scientists.

The slides for this talk are on SpeakerDeck, and the video is also available. In this post I want to recap my argument from the talk with some extended notes.

From Imposter Syndrome to Team Player

I work as a data scientist at Pivotal Labs helping clients, often large enterprises, to bring data science into their business. However, I really started working with data when I was an academic, handling results from numerical simulations of the early universe.

David Whittaker’s Imposter Syndrome post

For a lot of people in academia, the concept of imposter syndrome is very familiar, and for me academia was a long process of dealing with imposter syndrome. This is the idea that you aren’t really good enough to be here and someday someone is going to figure it out. This post by David Whittaker captures what is really happening. Though you may think everyone knows more than you, really you are just observing the combined knowledge of a lot of different people.

As an academic it was easy to think that others in my field, or outside in industry, must be handling these data problems in a better way. I had some formal computer science training from my undergraduate degree, but I’d taught myself how to use scientific Python tools and software carpentry practices.

When I left academia to work as a data scientist, my first steps were solo projects where I was often expected to be a data science unicorn. These types of projects involve a lot of pressure, and the full weight of stakeholder expectations rests solely on you. It’s not a comfortable position to be in. Due to the hype around data science there was very little understanding among business stakeholders of the exploratory nature of much data science work, where positive results are not guaranteed.

[By the way, the Data Science Unicorn is a real account, with a collection of data science learning material gathered by Jason Byrne.]

Working solo on projects is very draining, so more recently I’ve been fortunate to find myself working at Pivotal in teams including developers, product designers and product managers. We opened our Dublin office last year on Back To The Future Day, hence the branded DeLorean, and we are always looking for people with empathy to join us.

Working as part of a team has been great, and I’ve been able to learn a lot about how modern software is built. In particular I’ve been interested in how disciplines like design and product management have been integrated with more traditional development including a concept called the “Balanced Team”.

Balanced Team

The idea for Balanced Team came from conversations between developers and product teams who had been working in an agile methodology but were seeing problems integrating design and product management. As I understand it, the main idea behind Balanced Team is to share responsibility between the team and make sure everyone is acting in service to the team, not just their own self interest. Janice Fraser played a central role in formulating these ideas and explains them in more detail in this talk.

This image from her slides shows that the main roles represented in a balanced team are development, design and product management. Each role has obligations and an authority which they bring to any interactions. Fraser describes Balanced Team as more of a work environment than a methodology, essentially a frame of mind about how the team should interact.

In the past product designers have been kept entirely separate from development teams, often in specialised design agencies. They were frequently required to act as “hero designers”, unable to admit any faults and working hard in crunch mode to meet deadlines. It was striking for me to see the similarities with the expectations on data science unicorns. Some of the goals of Balanced Team are to get away from this notion of hero designers, to reduce power struggles and allow more space for people to speak freely and discard failing solutions.

In her talk Fraser describes the obligations and authority for each of the roles. For example the designer in the team needs to be the “empathizer-in-chief”, who understands the customer at an expert level, and can translate their high-value needs into product decisions. Their obligations to the team include honing their craft (as a service to the team) and facilitating balance between other parties within the team. Their main authority is the prioritisation of customer problems in every product conversation.


Monica Rogati’s ‘data thinking’ post

It’s worth noting that as it is currently formulated, the Balanced Team concept does not include any data oriented role. Monica Rogati described what happens in this situation in her recent post on “data thinking”. Rogati talks about how Apple’s Photos product can identify faces in your photo collection and highlights a list of 5 of these people in alphabetical order. Depending on their name, this means your closest friends and family might not appear in the top 5 listing, despite perhaps appearing in most of your photos.

As Rogati describes, a simple application of data thinking, with no complex machine learning or predictive analytics, would reorder these photos in frequency order. The take-away recommendation is that to avoid these product mistakes “you need data thinking to be part of the culture and top of mind, not an after-thought.”
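Rogati’s fix can be sketched in a few lines. The names and counts below are hypothetical, but they make the point: replacing alphabetical order with frequency order needs no machine learning at all, just data thinking.

```python
from collections import Counter

# Hypothetical output of a face detector: one label per recognised face.
detected_faces = ["Zoe", "Alice", "Zoe", "Bob", "Zoe", "Alice", "Walter", "Zoe"]

# Alphabetical ordering, as in the Photos example: your most-photographed
# people may not make the cut at all if their names sort late.
alphabetical_top = sorted(set(detected_faces))[:5]

# The 'data thinking' fix: rank people by how often they actually appear.
frequency_top = [name for name, _ in Counter(detected_faces).most_common(5)]

print(alphabetical_top)  # names in alphabetical order, regardless of frequency
print(frequency_top)     # Zoe first, since she appears in the most photos
```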

Things that worked

With this in mind, as a data scientist, I wondered how I would describe my obligations and authority. I’ve been fortunate to have worked over the last two years in teams with developers, designers and PMs, and in this time we’ve tried different approaches to bringing data science into this process. I’m going to describe some of the approaches that worked for us, some that didn’t, and then try to distill what I’ve learned into a similar form to Janice Fraser’s blueprint.

User research seeks to find the right direction to head in the space of possible products. As part of a balanced team the data scientist has an obligation to use available data to inform lines of questioning for in-person interviews, validate the results of these interviews and identify gaps when interviewees are not representative. It’s great to observe these user interview sessions as a data scientist, because I always come out with a long list of questions to answer from the data.

A data scientist can also guide user research questions in order to understand the type of predictive models that will be suitable, answering questions about how much ‘explainability’ is needed, and where the line lies between useful and creepy, for instance.

If your product manager has not worked with a data scientist before, you need to make a big effort to help them understand how you can contribute to the product. If they don’t understand how machine learning and predictive analysis can be effectively used, they will not direct the product discussion to include them.

As part of a balanced team you have an obligation to be part of all product conversations and story generation and to proactively suggest where data thinking could be most effective. Don’t wait for someone to come to you with an idea ‘perfect for some data science’.

Expanding this idea of education, your team will make the most effective use of data when ‘data thinking’ is central to the culture and practices of the team. If they have not been exposed to this before, you will need to educate them and involve them in understanding the available data and the analysis techniques you are using. Pairing goes some way towards sharing this knowledge, and you can also consider holding a ‘show & tell’ to describe data discoveries and explain the moving parts of the model you are building.

As part of a balanced team, you have an obligation to educate your team about the techniques you’re using, the data that is available and what choices you have made in your analysis. The goal is not scrutiny of your work, but building confidence in your approach and results.

At Pivotal we think Pair Programming is the best way to get fast feedback cycles and share knowledge. Data science is no different and we pair as data scientists when possible. We also like to pair with developers and designers to share knowledge of our methods and also get a new perspective on what we are building.

Pairing with developers is particularly useful to continue the journey from exploratory analysis to production code.
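As a concrete (and entirely hypothetical) sketch of that journey: exploratory work tends to live as inline one-offs, while the paired, production-ready version becomes a named, documented and testable unit. The function and numbers below are invented for illustration, not taken from any real project:

```python
# Exploratory notebook style: an inline one-off with hard-coded values.
# durations = [4980, 7260, 3600]; print(sum(durations) / len(durations) / 60)

# After pairing with a developer: a reusable, documented, testable function
# that makes its assumptions (units, empty input) explicit.
def mean_duration_minutes(durations_seconds: list[float]) -> float:
    """Mean incident duration in minutes, guarding against empty input."""
    if not durations_seconds:
        raise ValueError("no durations supplied")
    return sum(durations_seconds) / len(durations_seconds) / 60.0
```

The behaviour is unchanged, but the second form can be imported, unit-tested and reviewed, which is exactly what the journey to production code requires.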


Things that did not work

Now let’s consider a few things that we’ve tried, or experienced as part of a team in the past.



In one project we tried to keep our user stories unified from frontend to backend so that the overall user value would be apparent. This meant that whenever we delivered a story, we knew we had put together everything necessary for the user to benefit from that feature. Unfortunately it proved quite difficult to work with these large stories in practice.

For one thing, having a single scale for estimation proved difficult to work with, and our stories soon became too big to reliably show incremental progress to stakeholders, increasing communication difficulties. We eventually moved to having separate backlogs, which we already had for design work, although this meant extra effort was needed to keep the backlogs in sync.

As part of a balanced team, the data scientist will need to take part in conversations about the engineering backlog (as well as design), and the PM will need to have a good handle on the inter-dependencies between backlogs.

There’s sometimes a tendency to think data scientists should only arrive on a project once an MVP is built and some (usually limited) data is being collected. Even when machine learning is going to be at the core of a product, such as predictive maintenance, there’s sometimes a reluctance to bring data scientists/ML engineers in early on during the product creation phase.

This denies the data scientist the chance to be involved in the early conversations about the feasibility of different product directions, give advice on what early instrumentation to include, and provide context using any existing data sets in the business. As part of a balanced team, I think it’s clear that data scientists can contribute from the very beginning of the project and should ask to join the early product creation.

Because data scientists are expensive, expert resources, there is a tendency to spread them across multiple projects to maximise their effectiveness. In practice, continually switching contexts and juggling multiple simultaneous top priorities makes the team as a whole less efficient. This lesson has been learned with designers, product managers and others, but now seems to need to be learned again for data scientists.

Being part of a balanced team means putting the team’s success first, which means being available and focused on a single team and a single product. This can feel like an inefficient use of your time if you’re not occupied 100%, but the cost to the team of not having you available at the right moment is far greater. One way to justify this perceived inefficiency is to calculate the time and money wasted by a development team waiting for their shared data scientist to become available. Often what could have been a simple ten-minute conversation instead turns into days of emails, conference call scheduling and meeting planning, all because the data scientist is juggling other projects.

There is so much hype around data science that it can feel like management expect the addition of a data scientist to instantly solve all existing problems. This is a dangerous situation to get into, and you must work to manage expectations, especially when starting to work on a new problem with many uncertainties. As part of a balanced team, the data scientist has a responsibility to inform the team’s expectations, and gains by sharing the burden of communicating and managing expectations with outside stakeholders.

I hope our experiences can help you as you explore the idea of including data science in your balanced product teams. There are many things that could be part of the core obligations of a data scientist in the framework that Janice Fraser describes for a Balanced Team. For me, the data scientist should be the “voice of data” on the team. They should provide deep expertise and understanding about the available data, and be able to identify potential valuable uses and techniques.

More and more we are seeing the implications of unethical uses of data and the data scientist should have the obligation to guard against unjustified (legally and mathematically), unethical and inappropriate uses of data. On the other hand where data is not currently being collected or is insufficient for future uses, the data scientist has an obligation to the team to begin collecting data to facilitate expected future product goals. In addition, data scientists can also facilitate balance in the team. Were I to include another obligation, it would be to “hone your craft” as Janice Fraser describes explicitly for designers.

For me the important authority that a data scientist brings to the team is the ability to improve product conversations with ‘data thinking’ as Monica Rogati suggests. We can make data thinking a natural part of product decisions, in order to reduce the sort of data literacy problems highlighted above.

[In the original talk the final two slides had references to “data” instead of “data thinking”.]



To recap, I think there’s a lot of value in bringing data scientists into your balanced team. This helps make data thinking a central part of the product conversation. The data scientist has the obligation to provide data insights and explore potential uses, all in service to the team. In effect we are trying to break down the walls between data scientists and the rest of the product team.

Thank you to all the great people I’ve worked with as we’ve learned how data science contributes as part of a product team, from Pivotal and our client teams. In particular, I want to thank Janice Fraser for allowing me to reuse and adapt material from her Balanced Team talk slide-deck.

I hope that this is only the start of the conversation about Data Science in the Balanced Team and I look forward to hearing how data scientists are making ‘data thinking’ a central part of their product team’s work.


How to Beat the Traffic (at Strata)

This week I had the opportunity to attend and speak at one of the biggest Big Data conferences of the year.

The Strata conferences, run by O’Reilly, have been going for the last few years and in many ways have driven the awareness and adoption of data science and predictive analytics.

My colleagues Alexander Kagoshima and Noelle Sio and I talked about recent work we’ve been doing on using machine learning techniques to understand traffic flows in major cities and predict when travel disruptions will end. The talk seemed to be well received and generated a lot of questions and comments, both at the conference and on Twitter. This recent post on the Pivotal blog explains more about the projects and the overall goals.

As part of the disruption prediction work I built a simple web app which displays the predictions for currently active incidents.

Video of the talk will be available through O’Reilly, and our slides are available on Slideshare:

If you are interested in this or other projects the Pivotal Data Labs team have worked on, there is a lot more information on the official Pivotal site.


Beat the Traffic at Strata 2014

The next few weeks are going to be busy, and one of the reasons is that I am fortunate enough to be speaking at this year’s Santa Clara edition of Strata.

Alexander Kagoshima, Noelle Sio and I are talking in the Machine Data session on Thursday 13th February about “Driving the Future of Smart Cities – How to Beat the Traffic”. It’s the last parallel talk of the day, so perfect timing for figuring out how to navigate the Bay Area traffic on the way home.

We’ll be looking at how in-car data sources like GPS locations can enable more intelligent routing that predicts future traffic conditions along your journey.

In addition we’ve taken a look at traffic disruption data in London and created a model which predicts how long a new incident will last, giving you confidence that the collision which blocked your route to work this morning will have been cleared by the time you want to head home. I’ve written a simple web based demo which I hope to show during the talk.

Strata talks are videoed (yikes!) and we hope to make our slides available after the talk. Stay tuned as well for a sneak peek at the transport disruption demo.


2013: A year of personal change

I thought now would be a good time to reflect on 2013 and the changes that have happened in my life.

Officially I have now been out of academia for a year. My postdoc contract ended at the end of 2012, and while I stayed on at QMUL as a visiting researcher, I took the opportunity to travel for a few months with my wife, before setting out to find a new job.

There was no one day that I woke up not wanting to continue in academia, but as my postdoc continued I started to consider what other options I would have instead of just another postdoc. A big help in this regard was my university’s researcher specific career advisor. In particular they organised an event where former postdocs came back to QMUL to describe how they found moving out of academia. All the participants were really honest about their hopes and fears during the transition which I found refreshing compared to the polarised “it will be awful/great” that one often hears. There has been a lot of talk on Twitter recently about how to prepare grad students for life beyond academia and I definitely think these kind of events bring in voices with experience of working outside academia.

Even mainstream publications are now aware that we are in the era of Big Data and that a new role of data scientist has appeared. A data scientist is some part programmer, researcher, statistician and domain expert, best illustrated with a Venn diagram. For me the combination of research, programming and mathematics seemed like a really good fit, but I knew I wouldn’t have all the necessary skills straight from my postdoc.

I worked on a few different Coursera courses, in particular Andrew Ng’s Machine Learning course, so that I would know the vocabulary of machine learning and data science and be able to identify the different machine learning methods. There are a lot of data science resources online and a particularly good collection is the Open Source Data Science Masters.

After some searching I joined Pivotal, a new company created earlier this year out of parts of EMC and VMware to build a coherent strategy around the big data assets of those companies. I’m now part of the Pivotal Data Science team. We help our customers to make the best use of their data to solve specific business problems. It’s a great group of people with varied backgrounds and lots of experience dealing with real world (very!) big data problems (we’re hiring by the way).

I joined about six months ago now and it’s been a great experience so far. The team are really fun to work with and I’ve learnt a lot about both data science and business. It looks like 2014 is going to be very busy in a good way.


Data diving for charity

Last weekend I took part in the first London DataDive, a charitable event organised by DataKind, who previously organised similar events across the US. The basic premise is that charities have collected large amounts of data, on donors, fund-raising and the actual care, help or interventions they provide. Without costly analysts to sort through and make sense of the data, it goes unused, providing little or no value to the organisation.

Datakind wants to solve this problem by organising business consultants, data scientists and other analysts to provide pro bono services to the charities over the course of a weekend. The basic format is similar to a hackathon, with Friday night being spent networking, learning about the problems of the charities and picking one to work with. Saturday is spent working on the data to provide actionable results for the charities. These results are presented on Sunday morning along with any considerations or suggestions from the data scientists.

The three charities at the London event were Oxfam, Place2Be and Keyfund. Having been intrigued by Hannah of Keyfund’s speech on Friday night I opted to help them over the weekend. Keyfund work with young people to develop their skills and confidence through small projects which are conceived, planned and implemented by the young people themselves. Keyfund coordinates the assessment and funding of these projects through partnerships with local organisations across the country.

Over the weekend we analysed Keyfund’s data in a number of ways. In particular we considered the demographics of the children in the scheme, quantified the outcomes in terms of self-assessments and skills profiles, and assessed the likely effect of streamlining their process into fewer stages. Hopefully the results will be of use to Hannah and the Keyfund team in assessing their procedures and convincing funders to support this worthy cause.

On the technical side, I took this opportunity to learn more about the pandas library by Wes McKinney, which provides a structured data companion to NumPy’s more homogeneous arrays. The accompanying jargon is quite similar to R, with data frames and series in place of arrays and vectors. Some elements took a bit of getting used to, but one powerful feature is the deep integration with Matplotlib, allowing easy creation of histograms and box plots from data frames. I hope to look more into pandas, having just bought Wes McKinney’s new book “Python for Data Analysis”.
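To give a flavour of the R-like workflow described above, here is a minimal sketch. The records are invented stand-ins for the kind of per-region self-assessment data mentioned earlier, not Keyfund’s actual data:

```python
import pandas as pd

# Hypothetical records: labelled, mixed-type columns are where pandas
# departs from NumPy's homogeneous arrays.
scores = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "stage": [1, 2, 1, 2],
    "self_assessment": [3.2, 4.1, 2.8, 3.9],
})

# R-style split-apply-combine: mean self-assessment per region.
by_region = scores.groupby("region")["self_assessment"].mean()

# The Matplotlib integration mentioned above (uncomment in an interactive
# session to draw the plots):
# scores["self_assessment"].hist()
# scores.boxplot(column="self_assessment", by="region")
```

The `groupby` result is itself a labelled Series, so it can be indexed by region name or fed straight into further analysis or plotting.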

I really enjoyed the first international DataDive and really appreciate the work that organisers Jake Porway and Craig Barowsky put in to make everything run smoothly. The atmosphere was great throughout the weekend, including late into the night on Saturday, and the participation from everyone involved was inspiring. At a time when the gender imbalance in science and technology is making headlines, it was also great to see an event where this wasn’t an issue in the slightest. Overall I would heartily recommend that anyone involved in data give something back to the communities they live in by participating in one of these events. Plans are under way for more events of this kind in London and I will be jumping at the chance to get involved again.

Update: Just noticed that Dirk Gorissen who was on my team has a nice writeup with some results (including one of my graphs).