Category Archives: General

new/s/leak demo @ SPIEGEL

Now that we’re in the middle of new/s/leak’s home stretch, we had a final demo at SPIEGEL in Hamburg. After some exciting and productive development sprints, we proudly introduced the software to journalists, documentarists and software developers, who gave us the best feedback by playing around with the tool and becoming absorbed in using it. Some evidence:

We also collected some more systematic feedback, which helped us prioritize the remaining tasks. Thanks to everyone who came along, played and gave feedback – we had a blast at the meeting, and we learned a lot!

If you also want to see what has changed in new/s/leak since we showed it to an academic audience at ACL, here is the link to the demo (please use the Chrome browser!)

For a quick introduction, you can also watch a video (from our academic publication @ VIP):

During the upcoming weeks until Christmas, we’ll add some more requested features, fix some bugs, and create an easy-to-deliver software package. Stay tuned for a deployable version!

The Science behind new/s/leak I: Language Technology

Because of the Easter holiday season and several conference deadlines, this blog had to take a little break. Now that we’re back, we want to give a glimpse of the science behind new/s/leak.

We have two camps of scientists working together: computational linguists, who contribute software that extracts semantic knowledge from texts, and visualization experts, who bring the results into smart interactive interfaces that are easy for journalists to use (after the computational linguists have made the dataset even more complicated than before).

In this post, we will explain some of the semantic technology that helps to answer the questions “Who does what to whom – and when and where?”. The visualization science will be covered in a later feature.

Enriching texts with names and events

The results of the language technology software are easy to explain: we feed all the texts we want to analyze into several smart algorithms, and those algorithms find names of people, companies and places, and they spot dates. On top of those key elements (or “entities”), we extract the relationships between them, e.g. some person talks about a place, leaves a company, or pays another person. Finally, we put all of this into a nice network visualization.
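
Purely as an illustration of that flow (with made-up toy extractors and helper names, not the actual new/s/leak code), the pipeline has roughly this shape:

```python
# Toy sketch of the enrichment pipeline described above (not the real new/s/leak code).
# Each extractor is a trivial stand-in for a real component explained later in this post.

from collections import defaultdict

def extract_entities(text):
    # stand-in for named entity recognition
    return [(name, "ORG") for name in ("Apple", "Samsung") if name in text]

def extract_dates(text, published):
    # stand-in for temporal tagging; incomplete dates default to the publishing date
    return [published]

def extract_relations(text, entities):
    # stand-in for relation extraction: naively take the word between two known entities
    known = dict(entities)
    words = text.rstrip(".").split()
    return [(words[i - 1], w, words[i + 1])
            for i, w in enumerate(words[1:-1], start=1)
            if words[i - 1] in known and words[i + 1] in known]

def build_network(documents):
    graph = defaultdict(list)  # entity -> list of (relation, other entity, dates)
    for doc in documents:
        entities = extract_entities(doc["text"])
        dates = extract_dates(doc["text"], doc["published"])
        for a, verb, b in extract_relations(doc["text"], entities):
            graph[a].append((verb, b, dates))
    return dict(graph)

print(build_network([{"text": "Apple sues Samsung.", "published": "2016-12-01"}]))
# {'Apple': [('sues', 'Samsung', ['2016-12-01'])]}
```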


Entity and Relation Extraction for new/s/leak

We hope that you’re not ready to accept that all of this simply happens by computational magic, so let’s dig a bit deeper:

(Disclaimer: This is not a scientifically accurate explanation, but rather a very brief high-level illustration of some science-based concepts.)

Identifying names – 🍎 vs. Apple Inc.

Identifying words that name people, organizations and so on is not as easy as it might sound. (In Computational Linguistics, this task is called Named Entity Recognition, in short: NER.)

Just looking through a big dictionary containing names works sometimes, but many names can also be things, like Stone (which can be Emma Stone or a rock) or Apple (which can be food or the people who sell the smartphones). Within a sentence, however, it’s almost always clear which one is meant (at least to humans):

“Apple sues Samsung.”

…is clearly the company, whereas

“Apple pie is really delicious.”

probably means the fruit. The examples also show that just checking for upper or lower case is not sufficient, either.

What the algorithms do instead is first decide whether a word is a name at all (as in the Apple Inc. case) or rather some common noun (that’s the 🍎 case). Two factors drive that decision: first, how likely the string “apple” is to be a name, regardless of context. (Just to put some numbers on it, say the word apple has a 60% likelihood of being a company name and 40% of being a common noun.) Second, the algorithm checks how likely it is to see a name in the given context. (Again with exemplary numbers: any word at the beginning of a sentence that is followed by a verb has a 12% likelihood of being a name; followed by a noun, the likelihood is 8%, and so on.)

With this kind of information, the NER algorithm decides whether, in the given sentence, Apple is most likely to be a name (or something else).

In the final step, the algorithm uses similar methods to decide whether the name is more likely to belong to a person, a company or a place.

There are many different tools for named entity recognition; new/s/leak uses the Epic system.
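
To make the combination of the two factors a bit more tangible, here is a tiny sketch using the made-up numbers from above – purely illustrative, and in no way how the Epic system is actually implemented (a real NER tagger combines many more features before deciding, and also picks the entity type):

```python
# Toy illustration of the two factors described above, using the made-up example
# numbers from the text (these are not values from the Epic system).

# Factor 1: how likely the string itself is to be a name, regardless of context.
P_NAME_LEXICAL = {"apple": 0.60}   # 60% name, 40% common noun

# Factor 2: how likely *any* word is to be a name in a given context.
P_NAME_CONTEXT = {"followed_by_verb": 0.12,   # "Apple sues ..."
                  "followed_by_noun": 0.08}   # "Apple pie ..."

def name_score(word, context):
    """Naively combine the two factors into one 'is this a name?' score."""
    return P_NAME_LEXICAL.get(word.lower(), 0.0) * P_NAME_CONTEXT[context]

print(round(name_score("Apple", "followed_by_verb"), 3))  # 0.072
print(round(name_score("Apple", "followed_by_noun"), 3))  # 0.048 - the verb context
                                                          # makes the name reading
                                                          # more plausible
```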

Timing!

In principle, extracting dates (like “April 1st” or “2015-04-01”) works very similarly to extracting names. But dates are often incomplete – then we need more information: if we only find “April 1st” with no year given, we need some indicator of which year could be meant. In our case, the algorithm checks the publishing date of the document (which we almost always have for journalistic leaks) and defaults all missing years to the publishing year.

The extraction of time expressions in new/s/leak is done with the HeidelTime tool.
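
Just to illustrate the year-defaulting idea (this is not HeidelTime – it is a minimal sketch that assumes the third-party python-dateutil package is available): incomplete date expressions fall back to the document’s publishing date.

```python
# Minimal sketch of the year-defaulting idea using python-dateutil:
# fields missing from the expression are taken from the publishing date.

from datetime import datetime
from dateutil import parser

publishing_date = datetime(2015, 7, 23)   # hypothetical document metadata

print(parser.parse("April 1st", default=publishing_date).date())   # 2015-04-01
print(parser.parse("2015-04-01", default=publishing_date).date())  # 2015-04-01
```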

Finding relations (or events)

Now that we have found that Apple and Samsung appear somewhere in our text collection, and that both are companies, we want to know whether or not they actually have some business together, and if so, how they are connected. The algorithms behind this do a very human-like thing: they read all the texts and check whether they find Apple and Samsung (as companies) in the same document, and if so, they try to find out whether there is some event (like “suing” in the sentence above) that connects the two directly. There might also be multiple such relations, or they might change over time – then we try to find the most relevant ones. Relevant events in our example are things mentioned frequently for Apple and Samsung, but rarely in other contexts. For example, if we additionally find the sentence “Apple talks about Samsung” somewhere, talking would probably be less relevant than suing (from “Apple sues Samsung”), because talking shows up more often than suing and is not very specific to the Apple/Samsung story.

To find relations between entities, we use the same system employed in the Network of Days, together with relevance information computed by JTopia.
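
To illustrate the relevance idea (frequent for the Apple/Samsung documents, rare in the collection overall), here is a toy sketch in the spirit of tf-idf. This is not the Network of Days or JTopia code, just an illustration with a made-up four-document collection:

```python
# Toy relevance scoring: words that connect Apple and Samsung score higher when
# they are frequent in the pair's documents but rare in the collection overall.

import math
from collections import Counter

docs = [
    "Apple sues Samsung.",
    "Apple talks about Samsung.",
    "Apple talks about new phones.",
    "Samsung talks about new phones.",
]

def tokens(doc):
    return doc.lower().rstrip(".").split()

# Document frequency of every word in the whole collection.
df = Counter(w for doc in docs for w in set(tokens(doc)))

# Words occurring in documents that mention both entities.
pair_docs = [d for d in docs if "Apple" in d and "Samsung" in d]
pair_counts = Counter(w for d in pair_docs for w in tokens(d)
                      if w not in ("apple", "samsung"))

# Frequent for the pair, rare overall -> high relevance.
relevance = {w: c * math.log(len(docs) / df[w]) for w, c in pair_counts.items()}
for word, score in sorted(relevance.items(), key=lambda x: -x[1]):
    print(f"{word:10s} {score:.2f}")
# "sues" outranks "talks": it only ever connects Apple and Samsung, while
# "talks" also shows up in unrelated documents.
```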

Now that we have all this information about people, organizations, times and places, the software of our visualization scientists can display it together in one interactive graph. This visualization science part will be covered in one of the next entries.

Requirements Management

User requirements management is something that happens far too rarely, especially in scientific software. (And it can definitely be challenging.)

For our project, which brings together the very different worlds of science and journalism, as well as different academic disciplines, it’s even more important. We dedicated a whole day to it, with Franziska and Kathrin over at SPIEGEL in Hamburg – and we proved that requirements analysis can be both challenging and fun at the same time.

Overall, Kathrin and Franziska interviewed four journalists from different newsrooms, reflecting the whole diversity of potential new/s/leak user groups.

New Priorities

Some of the journalists’ answers were interesting because they prioritized things we thought were nice to have from our point of view, but maybe not so important to the end user. So here are the top 3 surprising lessons learnt:

  1. Metadata that comes with the documents is even more important than we thought. Our software should thus not just display a few selected metadata features (like time and geolocation), but rather show everything we can extract from the data, including, for example, data types and file sizes. (One showcase for the journalistic value of metadata is this feature about the Hacking Team Leak.)
  2. Source documents have to be accessible at all times. Our initial idea was to focus on the network of entities and show the documents only on demand – but journalists need a direct way to the original documents in every view, and then want to filter the documents by selecting certain entities, entity relations, time spans or other metadata.
  3. Networks are an utterly intuitive concept. Many concepts and figures from network theory (like centrality, connectedness, outliers…) have intuitive counterparts (“Who is in the center of all of this?”, “Who is best connected to whom?”, “Can I see who’s at the top of the communication hierarchy?”), and can provide crucial information – see the small sketch right after this list. That’s good news, and it also means that we have to be even more flexible when computing the connections in the network.
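
To make that concrete, here is a tiny, purely illustrative sketch (using the networkx library and a made-up entity graph – not new/s/leak code) of how such questions map onto standard network measures:

```python
# Toy illustration: journalists' questions expressed as standard graph measures.

import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Person A", "Person B"), ("Person A", "Person C"),
    ("Person A", "Company X"), ("Person B", "Company X"),
    ("Person D", "Company X"),
])

# "Who is in the center of all of this?" -> centrality measures
print(nx.degree_centrality(g))

# "Who is best connected to whom?" -> shortest connecting paths
print(nx.shortest_path(g, "Person C", "Person D"))
# ['Person C', 'Person A', 'Company X', 'Person D']
```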

Drafting the next new/s/leak version after the interviews

User-Specific Needs

Some functionality needs to be highly adaptable to meet the needs of different user groups and different working styles. The focus here is on two things:

  1. Powerful tagging functionality. We need to support free-text tagging, bookmarking and simple markers like “important” vs. “unimportant”. This allows users to create their own metadata.
  2. Transparency. Some users prefer precise results over extended functionality, while other users (especially people working under time pressure) would sacrifice a bit of accuracy for more automated support in filtering the data. To meet both needs, we will provide as much automated support as possible, but at the same time, we will clearly indicate what the machine generated, how confident we are about the machine’s result, and which part of the information is genuine (as in: was part of the source documents).
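
As a purely hypothetical sketch (not the actual new/s/leak data model), a document record that keeps user-created tags apart from machine-generated annotations – each carrying a confidence score and a pointer back to the genuine source text – could look like this:

```python
# Hypothetical record layout separating user-created metadata from
# machine-generated annotations with confidence and provenance.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MachineAnnotation:
    label: str            # e.g. "ORG:Apple"
    confidence: float     # how sure the extraction component is
    source_span: str      # the genuine text the label was derived from

@dataclass
class DocumentView:
    doc_id: str
    user_tags: List[str] = field(default_factory=list)   # free-text tags, "important", ...
    bookmarked: bool = False
    machine: List[MachineAnnotation] = field(default_factory=list)

doc = DocumentView(doc_id="cable-0815",
                   user_tags=["follow up", "important"],
                   machine=[MachineAnnotation("ORG:Apple", 0.93, "Apple sues Samsung.")])
print(doc)
```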

The scribbled wireframe (with some annotation)

The productive day at SPIEGEL was concluded with some final discussions, first drafts of wireframes, and coffee (see pictures).

Our next goal is to finish a first stand-alone prototype, with a special focus on relation extraction for the network.

Science + Data Journalism = new/s/leak

On January 1st, we officially started to build our “Network of Searchable Leaks” or, in short: new/s/leak. Our goal is to put the latest research in language technology and data visualization together to help journalists keep their heads above water when faced with a dataset like the famous Cablegate. The idea is to have a network of all actors (people, organizations, places) and show who does what, with whom, where, and when.

What sounds like magic is actually feasible using current research results: sceptics might want to look at the Network of the Day (in German), which will be the starting point for our new tool.

At some point, we want to arrive at something resembling this sketch from our project proposal:


An early wireframe for our software

 

The first kickoff with all project players in one room happened on January 18 (after several internal kickoffs and the meeting at Datenlabor): we were all warmly welcomed and well-caffeinated guests of our visualization colleagues from the Interactive Graphics Systems Group at TU Darmstadt. We had lots of constructive discussions about journalists’ needs, search, visual data representations, and our project name (which was the only question we had to postpone).
The most important outcome is that we are on the right track:

Four TU Darmstadt computer science students (Lukas Raymann, Patrick Mell, Bettina Johanna Ballin and Nils Christopher Böschen) already built a prototype as their software project. It shows a network of entities from the underlying documents, together with a timeline:

 

The first new/s/leak prototype


 
The screenshot offers a glimpse of something that could have helped the people who had to work double shifts to browse the 2 million records of the Cablegate leaks – if new/s/leak had been around at that time.

The next steps will bring more search functionality, dynamic changes in the network, and more data.


We made it!

Happy news: the VW Foundation has officially decided to fund our project with the working title DIVID-DJ: Data Extraction and Interactive Visualization of Unexplored Textual Datasets for Investigative Data-Driven Journalism.
We are one of eight projects funded as part of the initiative “Science and Data Journalism”. Our goal is to create a piece of software that visualizes the content of large text data collections, to help journalists working with data leaks.

The VW Foundation invited all project partners to a kickoff meeting at TU Dortmund, where all projects were introduced prior to the “Daten-Labor” conference of Netzwerk Recherche. The project funding will officially start in January 2016.

More details to come!