Monday, April 24, 2017

#OSM - Districts of #Kerala


A Wikipedian asked me to blog about this map. The map is shown from within the English Wikipedia. It works really well on my mobile (an iPhone). The next step: integrating multi-layered maps in articles.. and on a mobile?
Thanks,
       GerardM

For some documentation..

Thursday, April 20, 2017

#Wikidata user stories - Suggesting Henry Putnam, a great #Librarian

As software suggests what articles to write, it is relevant to understand what logic it is based on. Phenomena like the "six degrees of separation", made popular around Kevin Bacon, have their scientific counterpart in graph theory: "betweenness centrality". This is used as a basis in the research into what articles are important and what automated suggestions to make.

Mr Putnam is one of the more relevant librarians. He developed an eponymous classification system, continued its development as the Librarian of Congress (it is still in use), was twice president of the American Library Association and was a knight of the Order of the Polar Star. When weight is applied to references to a person, all this is relevant in the right setting.

When an article is to be written or improved, it helps when it can be suggested what exactly can be improved. By including statements in Wikidata, suggestions can be made in the local language. Facts like date of birth and death are also easy and obvious.

So when people consider a particular subject to be of universal relevance, it helps when associated subjects are well developed in Wikidata: when, for all the presidents of the American Library Association, facts like where they studied, where they worked and what awards they received are included. When this is done for all the people who share categories, the betweenness of many influential librarians increases. This will have its influence on what is suggested for people to do.
Thanks,
       GerardM

Wednesday, April 19, 2017

#Wikidata user stories - the sum of all #knowledge


Map showing all places English Wikipedia covers


Map showing all places GeoNames covers

They say "a picture paints a thousand words". There is no argument; English Wikipedia covers only so much. With such a lack of coverage it is impossible to understand what is missing and its relevance, particularly to people who do not read English.

LSJbot has created lots of articles in several Wikipedias for the places GeoNames knows about. As a consequence, much of the missing information enters Wikidata through the back door. There have been some rumblings among Wikidatans that the GeoNames data is not perfect.. But hey, let's make "Be bold", a Wikipedia quality, a Wikidata quality as well.

For many Wikipedians, the notion of bot-generated articles is anathema. For others, the fact that there is so much we do not cover is just as problematic. The good news is that more information in Wikidata will enable us to predict what is lacking in content. We only need to acknowledge that Wikipedia is not the sum of all knowledge.. yet.
Thanks,
      GerardM

#Wikidata user story - Suggestions to #Wikipedia editors

The #research done on "suggestions to Wikipedia editors" is exciting. There is a paper and a great presentation. The bottom line is that when you know what to suggest to people, when you make it personal, the result is what you would hope for: 3.2 times the number of articles created, twice as many as without personalised recommendations.

There is math involved, obviously, but the gist is that when suggestions are in line with previous activities, people will be triggered to do more. As the presentation explains, this first experiment asked people to translate from English. The assumption is that English covers more than most.

The slides of the presentation include visualisations showing the coverage of several Wikipedias. When you consider them, it becomes clear where the Wikimedia projects are challenged.

Leila Zia, the presenter, makes it clear: all this would not be possible without Wikidata. One thing where Wikidata differs from the assumptions of the research is that there is an increasing number of subjects with no links to Wiki(m/p)edia articles at all. Many of these are connected to existing content as they share common statements, statements like "profession: soccer player" or "award received: whatever award".

When totally new subjects are to be considered, there is already plenty that might be suggested in Wikidata itself.
Thanks,
      GerardM

Monday, April 17, 2017

#Wikidata user story - #DBpedia, #death and #Federation

Federation between DBpedia and Wikidata became possible. As a consequence, the results of a query that runs on DBpedia can be linked to Wikidata.

Some time ago people at DBpedia created a wonderful query that shows differences between DBpedia and the Dutch and Greek Wikipedia. It received approval from the Dutch Wikipedia community.

With federation something much more interesting became possible: a federated query comparing Wikidata with one DBpedia at a time. When the query runs, current data from Wikidata and DBpedia is presented. When a Wikipedia associated with a DBpedia changes, DBpedia may import the differences from an RSS feed; running the query again will then show the latest differences.

Updating information about one particular type of statement, like date of death, place of death or whatever, will always be based on the current differences.. Experiencing the results in this way is truly motivating. Federation is an instrument that can help us improve the quality of either federated system.
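The post does not include the query itself, but a federated comparison of death dates can be sketched. The snippet below is an illustrative reconstruction, not the query the DBpedia folks wrote: it assumes the Wikidata Query Service federating out to the public DBpedia endpoint, and the standard DBpedia properties `owl:sameAs` and `dbo:deathDate`.

```python
# Build a federated SPARQL query that lists items whose date of death in
# Wikidata (P570) differs from the one DBpedia reports. Illustrative only.

def build_death_date_diff_query(dbpedia_endpoint="https://dbpedia.org/sparql",
                                limit=100):
    """Return SPARQL text for the Wikidata Query Service; the SERVICE
    clause is the federated part that runs on the DBpedia endpoint."""
    return f"""
SELECT ?item ?wikidataDeath ?dbpediaDeath WHERE {{
  ?item wdt:P570 ?wikidataDeath .            # date of death in Wikidata
  ?article schema:about ?item ;
           schema:isPartOf <https://en.wikipedia.org/> .
  SERVICE <{dbpedia_endpoint}> {{            # runs remotely on DBpedia
    ?dbp owl:sameAs ?item ;
         dbo:deathDate ?dbpediaDeath .
  }}
  FILTER (?wikidataDeath != ?dbpediaDeath)
}}
LIMIT {limit}
"""

query = build_death_date_diff_query()
```

Pointing `dbpedia_endpoint` at, say, the Dutch or Greek DBpedia endpoint instead gives the "one DBpedia at a time" comparison the post describes.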
Thanks,
      GerardM

#Wikidata user story - #Wikipedia #diversity and diversity #research

Diversity, especially the "gender gap", is one of the best-researched subjects of Wikipedia. There are many projects that have as their goal to diminish the gap they object to.

Wikidata has the best and most up-to-date information about any Wikipedia. People are updating Wikidata all the time; typically its information is based on a Wikipedia.

Take gender; many a Wikipedia has a category for this, so it is easy to update Wikidata based on what is in such categories. When a researcher is interested in the articles where Wikidata does not have such information, those articles can be found, and it is appreciated when the researchers update Wikidata as part of these activities. As a rule, the percentage of "humans" with no known gender is dropping anyway.

When a Wikipedia editor has an interest in female scientists that do not have an article in English, it is easy enough to have a query for that. Not all female scientists, with or without a Wikipedia article, can be found this way, but it is just a matter of adding them to Wikidata. When another editor is interested in female scientists with no article in German or Kannada, it is just one change in the same query.
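That "one change" can be made concrete. The sketch below builds the kind of query described, assuming the usual Wikidata items and properties (Q5 human, P21 sex or gender, Q6581072 female, Q901 scientist); the exact modelling of "scientist" is a simplification.

```python
# Build a SPARQL query for female scientists lacking an article in a
# given Wikipedia. The target language is the only thing that changes.

def missing_article_query(language="en"):
    """Return SPARQL for female scientists with no sitelink to the
    <language>.wikipedia.org project."""
    site = f"https://{language}.wikipedia.org/"
    return f"""
SELECT ?scientist ?scientistLabel WHERE {{
  ?scientist wdt:P31 wd:Q5 ;                 # instance of: human
             wdt:P21 wd:Q6581072 ;           # sex or gender: female
             wdt:P106/wdt:P279* wd:Q901 .    # occupation: scientist (or subclass)
  FILTER NOT EXISTS {{
    ?article schema:about ?scientist ;
             schema:isPartOf <{site}> .      # no article in this Wikipedia
  }}
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
}}
"""
```

Calling `missing_article_query("de")` or `missing_article_query("kn")` is the German or Kannada variant: one parameter, same query.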
Thanks,
        GerardM

#Wikidata user story - the #library

The OCLC is an organisation combining most of the libraries in the world. It used to connect to the English Wikipedia but, as Wikidata connects all Wikipedias, the OCLC does a better job linking to Wikidata. Through Wikidata it can link to articles about authors in any language.

For many authors the connection between VIAF, the system used by the OCLC, and Wikidata is still missing. Many people are adding VIAF identifiers; once a month the data is imported and all the new data pops up.

Best practice at English Wikipedia has it that an {{authority control}} template is added in the reference section of articles about people. When a VIAF identifier is added in Wikidata, not only the VIAF identifier but also WorldCat information is shown (the example is for William Keepers Maxwell Jr.). Doing this is possible for any Wikipedia.
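The template can do this because the VIAF identifier lives in Wikidata as property P214. A minimal sketch of reading it out of an item's entity JSON, assuming the shape returned by Special:EntityData (the sample dict below is abbreviated and uses a placeholder identifier, not a real record):

```python
# Extract VIAF identifiers (property P214) from a Wikidata entity dict.

sample_entity = {
    "claims": {
        "P214": [  # P214 = VIAF identifier; value below is a placeholder
            {"mainsnak": {"datavalue": {"value": "12345678"}}}
        ]
    }
}

def viaf_ids(entity):
    """Return all VIAF identifier values found on the entity."""
    out = []
    for claim in entity.get("claims", {}).get("P214", []):
        value = claim["mainsnak"].get("datavalue", {}).get("value")
        if value:
            out.append(value)
    return out

print(viaf_ids(sample_entity))  # ['12345678']
```

Any Wikipedia's template can read the same statement, which is why "doing this is possible for any Wikipedia".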

Now to expand on this; when a reader opts in, we could show if a book of an author is available in the local library.. What do you think?
Thanks,
       GerardM

Why #Wikidata? Because it is useful!

Wikidata was useful from the start. It provides a service to all Wikipedias and, after the start-up phase, it now provides the same service to Commons and Wikisource. It connects information about the same subject: these are the interwiki links.

The next phase was to connect these subjects. This is an internal Wikidata project and it is not really used. This data could be useful, but it is not always up to date, and the requirements for the primary use cases are not realistic and almost impossible to fulfil. The challenge is to provide sourced information for every statement.

The challenge is: how do we provide a use for the Wikidata data? How do we get people to actually use Wikidata, have an interest in the data and maintain what is in their interest?

Software developers create "user stories" to explain what their software is to achieve. Why not write user stories that show how Wikidata can already be used and expand the stories on how to be even more useful and usable?
Thanks,
      GerardM

Sunday, April 16, 2017

#Wikipedia - The death of Lanier Meaders

Mr Meaders was a notable potter who died in February 1998, according to folkpottery.com. The English Wikipedia article however is in two minds about his death. Yes, he is dead, but when did he die?

According to the category he was one of the living dead for 10 years. In the text the year of his demise is correctly stated as 1998. Googling for a source turned up yet another date.

As I am not an English Wikipedian, I do not know how to indicate sources in English Wikipedia. The date of death in Wikidata does have a reference. The question is how differences like the dates of death of Mr Meaders are to be found, and how to improve the consistency of the information that we provide in all of our projects.
Thanks,
      GerardM

NB the information in Wikidata on Mr Meaders is not complete.

Thursday, April 13, 2017

#Wikidata - People die; implications for another #policy approach

People die, notable people die. It is natural and it happens all the time. Many a #Wikipedia has a category for the people who died in a specific year. Such categories are what makes a wonderful tool by Pasleim tick. It shows those Wikidata items that have no date of death while a Wikipedia knows about the demise of the person involved.

This is a wonderful tool; it allows Wikidata to take care of those who died and update its data. It also leaves us with another option: add one more tool, one that checks whether the date of death exists in the Wikipedias that do not have such a category.

Consider this; a date of death is relevant when you consider the "Biographies of Living People". Having complete information for people is important. So why not flip our approach to the BLP and provide tools to improve the existing information in all of our projects?

First things first; the objective is to signal the death of a person. As is the current policy, it is up to every project to do with it as it likes. What should follow is looking for sources when one is available and preferably adding at least one to Wikidata for re-use.

What are the benefits? A positive approach to maintenance, and an invitation for people to do something that actually matters now. It is an invitation to read the article and see what more can be done to get it into shape.

When the date of death exists in an article, the article will be removed from the list of articles that need attention. There are plenty of valid approaches to this.

Improving user engagement is one of the objectives of the Wikimedia Foundation itself. I really want the WMF to include active engagement where it makes a difference and be as proactive as it can in this field. This is a positive approach and that is what we badly need.
Thanks,
      GerardM

Saturday, April 08, 2017

#WhiteHouse Fellows - Mrs Margarita Colmenares

Mrs Margarita Colmenares is a White House Fellow. A message was posted on Twitter that her article had been created and, to support the message, it was easy enough to add her on Wikidata as well. The article mentioned that she was a White House Fellow, and adding one layer of additional information is one way of making a person more relevant.

Adding this fellowship and adding other people who were fellows was easy enough. The Wikipedia article referred to the White House website for information; when you visit it, you will be thanked for having an interest in this subject.

At a time like this it is good to consider Archive.org. Its crawler worked well at some dates; for other dates the message you will see is: "Got an HTTP 301 response at crawl time".

Anyway.. Together, the information at whitehouse.gov and at archive.org provide enough of a reference.
Thanks,
     GerardM

Friday, April 07, 2017

#Wikidata - #Perfection or #progress

When you consider the intention of the "BLP" or the "Biographies of Living People", you will find that it is defensive. It is the result of court cases brought against the Wikimedia Foundation or Wikipedians by living people. The result was a restrictive policy that intends to enforce the use of "sources" for all statements on living people.

The upside was fewer court cases; the downside, administrators who blindly applied this policy, particularly in the big Wikipedias. Many people left; they no longer edit Wikipedia.

At Wikidata there are proponents of enforcing a BLP explicitly so that they have the "mandate" to block people when they consider them too often in violation of such a policy.

For a reality check; there are many known BLP issues in Wikidata that are not taken care of. There are tools, like the one by Pasleim, that make it easy to do so. There have been no external complaints about Wikidata so far, but internal complaints, complaints about the quality of descriptions for instance, are easily waved away.

The implementation of a "DLP" or "Data of Living People" where "sources" are mandatory would kill much of the work done at Wikidata and would have no effect on the existing backlog. Clearing the backlog by deletion would remove much of the usability of Wikidata and would prove to be even worse.

In order to responsibly consider new policies, first reflect on the current state of a project. What issues need to be addressed? What can be done to focus attention on the areas where it is most needed? How can we leverage what we know in other projects and in external sources? When it is really urgent, make a cost analysis and improve the usability of our software to support the needed progress. And yes, stop insisting on perfection; it is what you aim for. None of us is in a position to throw the first stone.
Thanks,
      GerardM


Wednesday, April 05, 2017

#Wikimedia and our #quality

In Berlin, the Wikimedia Foundation deliberated about the future. A lot of noble intentions were expressed. People went home glowing in the anticipation of all the good things they want. It is good to talk the talk; now to follow up and walk the walk.

A top priority for Wikidata is that it is used and useful. As it becomes more useful, quality becomes more of a priority for the people who use it. They will actively curate the data and remedy issues because they have a stake in the outcome.

So far Wikidata is largely filled with information from all the Wikipedias, and this process can be improved substantially. For this to happen there is a need for more complete and up-to-date data. So what use can we give this data so that it gains use, and thereby gains value?

What if.. What if Wikidata could be used as an instrument to find the 4% of wiki links in Wikipedia that point to the wrong articles? With some minor changes to the MediaWiki software this can be done. This approach is described here, for instance.. The beauty of this proposal is that not all Wikipedians have to get involved; it is for those who care, and for the rest it is mostly business as usual.

There are other benefits as well. When it is "required" to add a source to a statement like "spouse of", it should be, or is, a requirement on the Wikipedia as well. When the source is associated with the wiki link, or red link for that matter, it should be possible for Wikidata to pick it up manually or with software.

When content of Wikidata more closely mirrors information of a Wikipedia in this way, it becomes easy and obvious to compare this information with other Wikipedias. Overall quality improves, but as relevant, the assurance we can give about our quality improves.

When we consider Wikimedia for the next 15 years, I expect that we will focus on quality and prevent bias not only by combining all our resources but also by reaching out to other trusted sources. By working together we will expose a lot more fake facts.
Thanks,
       GerardM

Sunday, April 02, 2017

#Wikidata - #Quality is a #perspective.

Forget absolutes. As an absolute, quality does not exist for Wikidata. At best quality has attributes, attributes that can be manipulated, that interact. With 25,430,779 items, any approach to quality will have a potentially negative effect when quality is approached from a different perspective.

Yet, we seek quality for our data and aim for quality to measurably improve. There are many perspectives possible and they have value, a value that is strengthened when it is combined with other perspectives.

At the Wikimedia Foundation, the "Biographies of Living Persons" or BLP has a huge impact. When you consider this policy, it is about biographies, a Wikipedia thing, and this is not what Wikidata does. It is important to appreciate this, as it is a key argument when a DLP or "Data of Living Persons" is considered. Important is that the BLP focuses on articles about living people; its aim is to prevent lawsuits arising from articles that have a negative impact on living people.

Data is different; it is used differently and it has an impact in different ways. Take for instance notability; a person may be notable and relevant because of having held an office or receiving an award. In order to complete information on the succession of an office or an award, it is therefore essential to include all persons involved in Wikidata. At the same time, when information is incomplete it can have an impact on a person as well: "you did not get that award because Wikidata does not say so".

Wikidata is incomplete and immature. Given the different perspectives on a DLP, most of them are not achievable in short order. The people who insist on a "source" for every statement would wipe most of the Wikidata statements and force it to a standstill. The people who insist on completeness have an impossible full-time job for many years to come.

So what to do? Nothing is not an option but seeking ways to improve both quality and quantity is. A key value of Wikidata is its utility. The "Black Lunch Table" is one example of giving utility to Wikidata. They use Wikidata to manage the Wikipedia articles they want to write and expand on the notability of artists by including information on Wikidata. All the information helps people to write Wikipedia articles. Quality is important. Being included on the Black Lunch Table means something; artists are considered to be notable and worthy of a Wikipedia article.

Another example is using the links to authors so that people can read a book.

Given the size of Wikidata, it is impossible to get everything right in short order. When we can get people to adopt subsets of our data, these will grow. Our data will be linked. When we get to the stage where people actually object to data in Wikidata, we have improved both our quantity and quality substantially. As it is, looking at all the data, typically there is little to object to and that is in itself objectionable.
Thanks,
     GerardM

#Wikimedia - First a #strategy, then #Action

The people at Open Library have books they love to share. They are in the process of opening what they have even more.

In a previous post it was mentioned that there is a JSON document for getting information on authors like Cicero. There are many works by Cicero, and today they have a JSON document in production for the books as well.

So what possible scenario is there for the readers of any Wikipedia? They check in Open Library what books there are for Cicero (or any other author). They download a book and read it.

Where we are:
  • there is an API informing about authors and their books at Open Library based on the Open Library identifier.
  • an app can now be built that shows this information
    • this app could use identifiers of other "Sources" like Wikidata, VIAF or whatever on the assumption that Wikidata links these "Sources".
    • this app could show information based on Wikidata statements in any language using Wikidata labels.
    • this app may download the book (maybe not yet but certainly in the future)

What next:
  • investigate the JSON and see what we already can do with it
    • publish the results and iterate
  • Add more identifiers of authors known to Open Library to Wikidata
    • there are many OL identifiers in the Freebase information; they need to be extracted, and a combined list of Wikidata identifiers and OL identifiers allows OL to curate it for redirects; we can then publish.
  • Raffaele Messuti pointed to existing functionality that retrieves an author ID for Wikidata and VIAF using an ISBN number.
    • Open Library knows about ISBN numbers for its books. When it runs the functionality for all the authors where it does not have a VIAF identifier it can enrich its database and share the information with Wikidata.
    • Alternatively someone does this based on exposed information at Open Library.. :)
  • We add a link to Open Library in the {{authority control}} in Wikipedia
  • We could add information about nearby libraries like they do in WorldCat [1].
  • We can measure how popular it is; how many people we refer to Open Library or to their library.
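The first bullet under "Where we are" can be sketched. The endpoint pattern (`/authors/<OLID>.json`) and the `remote_ids` field are part of the Open Library API; the sample record and its identifier values below are placeholders, not a verbatim response (Q1541 is Cicero's Wikidata item, the VIAF value is made up).

```python
# Read an author record from the Open Library API and pull out the
# VIAF and Wikidata identifiers it links to.
import json
from urllib.request import urlopen

def fetch_author(olid):
    """Fetch the JSON record for an Open Library author id such as 'OL25843A'."""
    with urlopen(f"https://openlibrary.org/authors/{olid}.json") as resp:
        return json.load(resp)

def linked_ids(author):
    """Return the VIAF and Wikidata ids from 'remote_ids', when present."""
    remote = author.get("remote_ids", {})
    return {"viaf": remote.get("viaf"), "wikidata": remote.get("wikidata")}

# Illustrative record; the viaf value is a placeholder.
sample = {"name": "Cicero", "remote_ids": {"viaf": "123456789", "wikidata": "Q1541"}}
print(linked_ids(sample))  # {'viaf': '123456789', 'wikidata': 'Q1541'}
```

An app along the lines of the bullets would start from exactly this kind of lookup, then use the Wikidata id to fetch labels in the reader's language.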
At the Wikimedia Foundation we aim to share in the sum of all knowledge. We aim to enable people to acquire information. Making this happen for people at Wikipedia, Open Library and their library is part of this mission; we just have to be bold and make it so.
Thanks,
      GerardM

Saturday, April 01, 2017

#Wikimedia - Sharing all #knowledge

It is strategy time at the Wikimedia Foundation. For me the overarching theme is: "Share in the sum of all knowledge". Ensuring that knowledge and information is available is not only an objective for us; it is an objective we share with organisations like the Internet Archive and the OCLC.

One of the activities of the Internet Archive is the "Open Library". It provides access over the Internet to books that are free to read. At Wikidata we include identifiers for authors that are known to the Open Library, so all it takes is for a Wikipedia to have an {{authority control}} template on its author articles, and a link to Open Library is provided.

When you work together, a lot can be achieved. A file with identifiers for authors has been sent to the OCLC and Open Library. The result is that in the JSON for these authors Open Library includes a link to both VIAF (a system by the OCLC) and Wikidata. This is the JSON for Mr Richard W. Townshend.

The next step is to optimise the process of including identifiers for both VIAF and Open Library. What we bring in is our community. We have done a lot of work using Mix'n'match. We add identifiers when it seems opportune and we already function as a stepping stone between Open Library and the OCLC. So when we can target attention in Mix'n'match per language, it already becomes a lot easier to make a match. It may be possible for the OCLC and Open Library to match authors through publications; in that way technology is a deciding factor.

In the end there is only one point to all this: share in the sum of all knowledge. We all have a part to play.
Thanks,
       GerardM