Sunday, May 19, 2019

#Scholia: on the "requirement" of completeness

Scholia, the presentation of scholarly information on authors, papers, universities, awards and more, is at this time not included in the "Authority control" part of a Wikipedia article. As I understand it, the reason is that Wikipedians "who matter" insist that its information must be complete.

That is imho utter balderdash.

The first argument is the Wiki principle itself. Things do not need to be complete; in the Wiki world it is all about the work that is underway. The second is in the information that it provides: it is arguably superior to what a Wikipedia article provides on the corpus of papers written by an author. The third is that, with the prospect of all references of all Wikipedias ending up in Wikidata, value is added when a paper can be seen in relation to its authors and citations. It matters when it is known which statements a paper is said to support. It matters that we know which papers have been retracted. The fourth argument is in the maths of it all; scientific papers typically have multiple authors, and it takes only one author with an ORCiD identifier to get a paper included. The other authors have not been open about their work; it is their own doing that they are not known in the most read corpus on the planet. They still exist, but as "author strings". When a kind soul wants to lift them out of obscurity, they can.
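
To make the "author strings" point concrete, a query along these lines lists, for one ORCID-linked author, the co-authors on their papers that are still plain "author name strings" (P2093). This is a minimal sketch in Python against the public Wikidata query service; the author QID is a placeholder you would replace with a real item.

    # Sketch: papers credited (P50) to one linked author that still carry
    # co-authors as plain "author name strings" (P2093).
    import requests

    AUTHOR_QID = "Q12345678"  # hypothetical placeholder for a linked author

    QUERY = """
    SELECT ?paper ?paperLabel ?authorString WHERE {
      ?paper wdt:P50 wd:%s ;
             wdt:P2093 ?authorString .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    LIMIT 50
    """ % AUTHOR_QID

    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "scholia-blog-sketch/0.1"})
    r.raise_for_status()
    for row in r.json()["results"]["bindings"]:
        print(row["paperLabel"]["value"], "-", row["authorString"]["value"])

Every string returned is a co-author waiting for that kind soul.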

As to the "Katie Bouman"s among them? There are many fine people who are equally deserving but have not yet been recognised for their relevance. Fine people who have a public ORCiD record. For them it is feasible to have their Scholia ready when they are recognised. For the others, well, it is not a Pokémon game, it is a Wiki.
Thanks,
      GerardM

Sunday, May 12, 2019

@Wikidata Women in science - Lesley Wyborn

For Lesley Wyborn a Wikipedia article exists. She "built an international reputation for innovative leadership in geoinformatics and global e-research, particularly in the geoscience area" according to the motivation for the "Outstanding Contributions in Geoinformatics" award. Notability is not an issue.

When the article was written in 2016, no attention was given to "authority control", and consequently in 2018 an additional item was created with an ORCID identifier. In 2019 additional work was done and the two items were merged. A Google Scholar identifier and the award were added, potentially addressing the issues raised on the Wikipedia article.
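
The merge described above is easy to spot in the data: two items carrying the same ORCID iD (P496) are almost certainly the same person. A minimal sketch of such a duplicate check, in Python against the public Wikidata query service; on the full dataset the query may need to be narrowed to avoid a timeout.

    # Sketch: pairs of items that share an ORCID iD (P496) and are
    # therefore candidates for a merge.
    import requests

    QUERY = """
    SELECT ?orcid ?item1 ?item2 WHERE {
      ?item1 wdt:P496 ?orcid .
      ?item2 wdt:P496 ?orcid .
      FILTER(STR(?item1) < STR(?item2))
    }
    LIMIT 50
    """

    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "duplicate-orcid-sketch/0.1"})
    r.raise_for_status()
    for row in r.json()["results"]["bindings"]:
        print(row["orcid"]["value"], row["item1"]["value"], row["item2"]["value"])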

Arguably both the Wikidata and the Wikipedia information could be more informative. However, given that both are Wikis, that is quite acceptable. It is quite likely that many more papers are already on Wikidata and just need attribution. That is something for others to do; we are a community, remember.
Thanks,
     GerardM

Thursday, May 09, 2019

How and when I trust science, when would you trust science?

When something drops on my head, it is gravity that brings it down. When I travel to the USA, the shortest route is over Iceland; the world is round. I did not get polio, measles or whooping cough; my parents had me vaccinated. I worked in computing and most of the women were better than the men; my observation, and I am happy working for women.

When I read articles in Wikipedia, I know that I can trust it up to a certain level because there are citations indicating that something is true or that a given opinion is held. Its neutral point of view means that equal weight is to be given to opinions, but not when they fly in the face of proven facts, the science about a subject. The best news: when scientific papers are retracted, we start to know about this and act upon it in Wikipedia. The nonsense, the preconceptions, the paid-for science are to be removed once they are retracted.

In the Netherlands a prominent scientist has been tasked to root out those medical practices that are proven not to work. His work will be hard: he will have to deal with vested interests, ingrained practices and a public that wants everything to be as expected. People will still be vaccinated, some medications will no longer be available, some treatments will no longer be offered; they do not work, even when you are desperate for them to work.

That is me; now you. When can you trust? Well, it is good to be wary; just consider the numbers. When a politician says he was effective because many drug dealers went to jail, ask yourself why they should be in jail: did your community end up safer? If not, not much was achieved. When scientific papers show that the number of addicts goes down when substance dependence is treated as a medical and not as a criminal issue, wonder what this means for the communities these people come from. Seek out the numbers and you are no longer talking politics but considering the science of it.

A lot of so-called science defends points of view that do not fit facts on the ground. This can be tricky to understand because the difference may be local versus global. Worldwide, temperatures go up. Our climate is no longer stable and yes, in the USA it has been cold lately; not so in Europe, Africa, Asia. One thing to consider: is it truly science, peer reviewed and everything, or is it there to shore up a point of view? A telltale sign is when it comes from a "research institute" / "policy institute" paid for by an interested party.
Thanks,
      GerardM

Tuesday, April 23, 2019

Scopus is "off side"

At Wikidata we have all kinds of identifiers for all kinds of subjects. All of them aim to provide unique identifiers, and the value of Wikidata is that it brings them together, allowing the information of multiple sources about the same subject to be combined.
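
What "bringing them together" looks like in practice: every external identifier on a single item can be listed with one query. A minimal sketch in Python against the public Wikidata query service, using Albert Einstein (Q937) as an arbitrary subject.

    # Sketch: list all external identifiers attached to one item.
    import requests

    QUERY = """
    SELECT ?propLabel ?value WHERE {
      wd:Q937 ?directClaim ?value .
      ?prop wikibase:directClaim ?directClaim ;
            wikibase:propertyType wikibase:ExternalId .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    """

    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "identifier-overview-sketch/0.1"})
    r.raise_for_status()
    for row in r.json()["results"]["bindings"]:
        print(row["propLabel"]["value"], "=", row["value"]["value"])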

Scientists may have a Scopus identifier. In Wikidata, Scopus is very much a second-rate system because learning what identifier goes with what person requires jumping through proprietary hoops. Scopus sits behind a paywall, it has its own advertising budget, and consequently it does not need the effort of me and volunteers like me to put the spotlight on the science it holds for ransom. When we come across Scopus identifiers we include them, but Scopus identifiers are second-class citizens.

At Wikipedia we have been blindsided by scientists who gained awards and became instant sensations because of their accomplishments. For me this is largely the effect of us not knowing who they are and what their work is. Thanks to ORCiD, we increasingly know about more and more scientists and their work. When we do not know of them, when their work is hidden from the real world, I don't mind. When we know about them and their work in Wikidata it is different. That is when we could, and should, know their notability.
Thanks,
      GerardM

Sunday, April 14, 2019

The Bandwidth of Katie Bouman

First things first: yes, many people were involved in everything it took to make the picture of a black hole. However, the reason it is justified that Katie Bouman is the face of this scientific first is that she developed the algorithms needed to distill the image from the data. To give you a clue about the magnitude of the problem she solved: the data was physically shipped on hard drives from multiple observatories. For big science, the Internet often cannot cope.

There are eternal arguments about why people are notable in Wikipedia. For a lot of that knowledge a static environment like Wikipedia is not appropriate, and this environment is causing a lot of those arguments. To come back to Katie, or rather to every scientist: their work is collaborative and much of it is condensed into "scientific papers". One of the black hole papers is "First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole". There are many authors to this paper, not only "Katherine L. Bouman". When a major event like a first picture of a black hole happens, it is understandable that a paper like this is at first attributed to a single author.

Wikimedia projects have to deal with the ramifications of science for many reasons. The most obvious one is that papers are used for citations. To do this properly, it is the science that defines what is written, not papers selected to support an opinion. The public is invited to read these papers, and the current Wikipedia narrative rests on single papers, single points of view. This makes some sense because the presentation is static. In Wikidata the papers on any given topic are continuously expanded, and the same needs to be true for papers by any given author. Technically a Wikipedia could use Wikidata as the source for publications on a subject or by an author. The author could be Katie Bouman, and proper presentation makes it obvious that the pictures of a black hole were a group effort, with Katie responsible for the algorithms.
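
What "Wikidata as the source for publications by an author" can look like: a single query returns an author's papers, newest first, ready for a presentation layer to render. A minimal sketch in Python against the public Wikidata query service; the ORCID iD below is a placeholder, not Katie Bouman's actual identifier.

    # Sketch: publications (with publication date P577) for whoever holds
    # the given ORCID iD (P496).
    import requests

    ORCID_ID = "0000-0000-0000-0000"  # hypothetical placeholder

    QUERY = """
    SELECT ?paper ?paperLabel ?date WHERE {
      ?author wdt:P496 "%s" .
      ?paper wdt:P50 ?author ;
             wdt:P577 ?date .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    ORDER BY DESC(?date)
    LIMIT 100
    """ % ORCID_ID

    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "author-bibliography-sketch/0.1"})
    r.raise_for_status()
    for row in r.json()["results"]["bindings"]:
        print(row["date"]["value"][:10], row["paperLabel"]["value"])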
Thanks,
       GerardM

Tuesday, April 09, 2019

@Wikidata is no relational #database

When you consider the functionality of Wikidata, it is important to appreciate that it is not a relational database. As a consequence there is no implicit way to enforce restrictions. Emulating relational restrictions fails because it is not possible to check in real time what it is that is to be restricted.

An example: in a process, new items are created when no item with a given external identifier is available. The query service indicates that no item exists and a new item is created. A few moments later the existence of an item with the same external identifier is checked using the query service again. Because of the time lag, what the query service knows to be in the database and what actually is in the database differ; the query service indicates there is no item, and a new but duplicate item is created.

Implications are important.

Wikidata is a wiki, and the implications are quite different. In a wiki things need not be perfect, and the restrictions of a relational model are in essence recommendations only. In such a model duplicate items as described above are not a real problem; batch jobs may merge these items when they occur often enough. Processes may keep track of the items they created earlier, thereby minimising the issue.
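
A minimal sketch of the pattern described above, with hypothetical helper names: the query service lags behind the live database, so a SPARQL check alone can miss an item this very process created moments ago; a local record of what the run has already created avoids most of the duplicates.

    # Sketch: check-then-create for items identified by a DOI (P356).
    # create_item is a hypothetical callable that talks to the Wikidata API
    # and returns the new QID; it stands in for whatever bot framework is used.
    import requests

    WDQS = "https://query.wikidata.org/sparql"
    created_this_run = {}  # DOI -> QID created earlier in this run

    def item_for_doi(doi):
        # DOIs are case-insensitive; Wikidata stores them upper-cased
        query = 'SELECT ?item WHERE { ?item wdt:P356 "%s" . } LIMIT 1' % doi.upper()
        r = requests.get(WDQS, params={"query": query, "format": "json"},
                         headers={"User-Agent": "dedup-sketch/0.1"})
        r.raise_for_status()
        rows = r.json()["results"]["bindings"]
        return rows[0]["item"]["value"] if rows else None

    def ensure_item(doi, create_item):
        if doi in created_this_run:          # our own, not yet visible in the query service
            return created_this_run[doi]
        existing = item_for_doi(doi)         # older items are visible
        if existing:
            return existing
        qid = create_item(doi)               # only now create a new item
        created_this_run[doi] = qid
        return qid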

What is important is that we do not blame people for what Wikidata is not and that we accept its limitations. Functionality like SourceMD enables what Wikidata may become: a link to all knowledge. Never mind if it is knowledge in Wikipedia articles, scholarly articles or in sources used to prove whatever point.
Thanks,
      GerardM

Sunday, March 24, 2019

#Sharing in the Sum of all #Knowledge from a @Wikimedia perspective II

When we are to share in the "sum of all knowledge", we share what we know about subjects: articles, pictures, data. We may share what knowledge we have and what others have, and that is what it takes for us to share in the sum of all knowledge. The question is why we should share all this, how to go about it and, finally, how it will benefit our public and help us share the sum of all knowledge.

At the moment we do not really know what people are looking for. One reason is that search engines like the ones by Google, Microsoft and DuckDuckGo recommend Wikipedia articles, and as a consequence the search process is hidden from us. However, some people prefer the "Wikipedia search engine" in their browser. We can do better and present more interesting search results. From a statistical point of view, we do not need big numbers to gain significant results.

When we check what the "competition" does, we find their results in many tabs; "the web" and "images" are the first two. The first is text based and offers whatever there is on the web. What we will bring is whatever we, and the organisations we partner with, have to offer. It will be centered on subjects and their associated factoids, presented in any language.

One template to consider is how Scholia presents its subjects. It differs; it depends on whether the subject is a publication, a university, a scholar, a paper. Large numbers make specific presentations feasible, and thanks to Wikidata we know what kind of presentation fits a particular subject. A similar approach is possible for sports or politics. It takes experimentation, and that is what makes it a Wiki approach.

Thanks to this subject-based approach, language plays a different role. What is vital is that, for finding the subjects, potentially differing labels are available or become available. One important difference with the Google, Microsoft or DuckDuckGo approach is that, as a Wiki, we can ask people to add labels and missing statements. This will make our subject-based data better understood in the languages people support. Yes, we can ask people to have a Wikimedia profile and yes, we may ask people to support us where we think people looking for information have to overcome hurdles.
Thanks,
       GerardM

Saturday, March 16, 2019

#Sharing in the Sum of all #Knowledge from a @Wikimedia perspective I

Sharing the sum of all knowledge is what we have always aimed for in our movement. In Commons we have realised a project that illustrates all Wikimedia projects and in Wikidata we have realised a project that links all Wikimedia projects and more.

When we tell the world about the most popular articles in Wikipedia, it is important to realise that we do not report what the most popular subjects are. We could, but so far we don't. The popularity of a subject is the sum of the traffic to all Wikipedia articles on that subject. Providing this data is feasible; it is a "big data" question.

We do have accumulated data for the traffic of articles on all Wikipedias, and we can link the articles to their Wikidata items. What follows is simple arithmetic. It is powerful because it will show that English Wikipedia is less than fifty percent of all traffic. That will help make the existing bias for English Wikipedia and its subjects visible, particularly because it will be possible to answer a question like "What are the most popular subjects that do not have an article in English?" and compare those to popular diversity articles.
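
A minimal sketch of that arithmetic, using only public APIs: take one Wikidata item, follow its sitelinks to the Wikipedia articles about the subject, and add up a month of pageviews per language. The item and month are arbitrary examples; a real job would loop over many items, handle unusual domains and store the results.

    # Sketch: traffic per subject = sum of pageviews of all articles linked
    # to one Wikidata item.
    import requests

    HEADERS = {"User-Agent": "traffic-per-subject-sketch/0.1"}
    QID = "Q937"                      # Albert Einstein, an arbitrary subject
    START, END = "20190101", "20190201"

    # 1. the articles about this subject, from the Wikidata sitelinks
    r = requests.get("https://www.wikidata.org/w/api.php",
                     params={"action": "wbgetentities", "ids": QID,
                             "props": "sitelinks", "format": "json"},
                     headers=HEADERS)
    sitelinks = r.json()["entities"][QID]["sitelinks"]

    # 2. monthly pageviews per article, from the Wikimedia REST API
    total = 0
    for site, link in sitelinks.items():
        if not site.endswith("wiki") or site == "commonswiki":
            continue                  # Wikipedias only, skip sister projects
        project = site[:-4] + ".wikipedia.org"
        title = link["title"].replace(" ", "_")
        url = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
               "%s/all-access/user/%s/monthly/%s/%s" % (project, title, START, END))
        resp = requests.get(url, headers=HEADERS)
        if resp.status_code != 200:
            continue
        views = sum(item["views"] for item in resp.json()["items"])
        total += views
        print(site, views)
    print("all Wikipedias combined:", total)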

In Wikidata we know about the subjects of all Wikipedias, but it too is very much a project based on English. That is a pity when Wikidata is to be the tool that helps us find what subjects people are looking for that are missing in a Wikipedia. For some Wikipedias there is an extension to the search functionality that helps find information. It uses Wikidata and it supports automated descriptions.

Now consider that this tool is available on every Wikipedia. We would share more information. With some tinkering, we would know what is missing where. There are other opportunities; we could ask logged-in users to help by adding labels for their language to improve Wikidata. When Wikidata does not include the missing information, we could ask them to add a Wikidata item, additional statements and a description to improve our search results.

This data approach is based on the result of a process, the negative results of our own search, and it is based on the active cooperation of our users. At the same time, we accumulate negative results of search where there has been no interaction, link them to Wikidata labels and gain an understanding of the relevance of these missing articles. This fits in nicely with the marketing approach to "what it is that people want to read in a Wikipedia".
Thanks,
      GerardM

Saturday, March 09, 2019

A #marketing approach to "what it is that people want to read in a @Wikipedia"

All the time people want to read articles in a Wikipedia, articles that are not there. For some Wikipedias that is obvious because there is so little, and, based on what people read in other Wikipedias, recommendations have been made suggesting what would generate new readers. This has been the approach so far; a quite reasonable approach.

This approach does not consider cultural differences, and it does not consider what is topical in a given "market". To find an answer to the question "what do people want to read", there are several strategies. One is what researchers do: they ask panels, write papers, and once that is done there is a position to act upon. There are drawbacks:
  • you can only research so many Wikipedias
  • the other Wikipedias get no attention
  • the composition of the panels is problematic, particularly when they are self-selecting
  • there are no results while the research is being done
The objective of a marketing approach is centered on two questions: 
  • what is it that people are looking for now (and cannot find) 
  • what can be done to fulfill that demand now
The data needed for this approach: negative search results. People search for subjects all the time, and there are all kinds of reasons why they do not find what they are looking for. Spelling, disambiguation and nothing to find are all perfectly fine reasons for an empty result. 

The "nothing to find" scenario is obvious; when it is sought often, we want an article. Exposing a list of missing articles is one motivator for people to write. Once they have written, we do have the data of how often an article was read. When the most popular new articles of the last month are shown, it is vindication for authors to have written popular articles. It is easy, obvious and it should be part of the data Wikimedia Foundation already collects.. In this way the data is put to use. It is also quite FAIR to make this data available. 

For the "disambiguation" issue, Wikidata may come to the rescue. It knows what is there and, it is easy enough to add items with the same name for disambiguation purposes. Combine this with automated descriptions and all that is requires is a user interface to guide people to what they are looking for. When there is "only" a Wikidata item, it follows that its results feature in the "no article" category.

The "spelling" issue is just a variation on a theme. Wikidata does allow for multiple labels. The search results may use of them as well. Common spelling errors are also a big part of the problem. With a bit of ingenuity it is not much of a problem either.

Marketing this marketing approach should not be hard. It just requires people to accept what is staring them in the face. It is easy to implement, it works for all the 280+ languages, and it is likely to give a boost not only to all the other Wikipedias but also to Wikidata.
Thanks,
        GerardM

Sunday, February 17, 2019

@WikiResearch - Nihil de nobis, sine nobis

There is this wonderful notion of how Research is going to tell us what to do in light of the strategic Wikimedia 2030 plans. Wonderful. There is going to be a taxonomy of the information we are missing.

Let me be clear. We do need research, and the data it is based on is to be available to us. There is no point in a future taxonomy of missing knowledge when we have been asking for decades: "what articles are people looking for that they cannot find?". If there is to be a taxonomy, what else should it be based on?

When we are to fill in the gaps in what Wikipedia covers, we can stimulate more new articles by indicating what traffic they get in their first month. We can stimulate our readers to learn more by showing what Wikidata has to offer and by showing its links to texts in other languages. It may even result in new stubs, even articles, in "their" language. This technology has been available for years now.

WikiResearch is full of arguments on the importance of citations and on Wikidata as the platform for all Wikipedia sources; why then are the WikiResearch papers not in Wikidata from the start? What is it that makes WikiResearchers consider that Wikidata is not about them, just as it is about any other subject Wikidata covers? What is it that makes their work less findable (FAIR) than what is known to have been published as open content by the NIH?

The point I want to make is that no matter how well intended what WikiResearch aims to achieve is, it loses the interest, involvement and commitment of people like me, the people it needs to get the results it aims for.

Yes, do research, but we should not wait for its results; we know how to stimulate people to write new articles.
Thanks,
      GerardM

Sunday, February 10, 2019

#Wikidata - A quick and dirty "HowTo" to improve exposure of a subject in Wikidata

When you want to expose a particular subject, any subject, in Wikidata, this is the quick and dirty way to expose much of what there is to know. There are a few caveats. The first is that the aim is not to be complete, the second that it is biased towards scientists who are open about their work at ORCiD.

You start with a paper or a scientist. They have a DOI or ORCiD identifier and they may already be in Wikidata. First there is the discovery process of the available literature and the authors involved. The SourceMD tool is key; with a SPARQL query or with a QID per line, you run a process that will update publications by adding missing authors, or that will add missing publications and missing authors to known publications.

When you treat this as an iterative process, more authors and publications become known. When you run the same process for (new) co-authors, more publications and authors become known that are relevant to your subject.
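
A minimal sketch of that iteration: given the item you started from, list the co-authors already linked on its papers, one QID per line, so the list can be fed into the next SourceMD run. The seed QID is a placeholder.

    # Sketch: co-authors (P50) on papers of one seed author, printed as
    # one QID per line for the next iteration.
    import requests

    SEED_QID = "Q12345678"  # hypothetical: the author item you started from

    QUERY = """
    SELECT DISTINCT ?coauthor WHERE {
      ?paper wdt:P50 wd:%s ;
             wdt:P50 ?coauthor .
      FILTER(?coauthor != wd:%s)
    }
    """ % (SEED_QID, SEED_QID)

    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "sourcemd-iteration-sketch/0.1"})
    r.raise_for_status()
    for row in r.json()["results"]["bindings"]:
        print(row["coauthor"]["value"].rsplit("/", 1)[-1])  # Q-number only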

To review your progress, you use Scholia. It has multiple modes that help you gain an understanding of authors, papers, subjects, publications, institutions. You will see the details evolve. NB: mind the lag it takes Wikidata to update its query database; it is not instant gratification.

A few observations: your aim may be to be "complete", but publications are added all the time and the same is true for scientists. People increasingly turn to ORCiD for a persistent identifier for their work. The real science is in assigning a subject to a paper. Arguably the subject may be in the name of the article, but as an approach that is a bit coarse. I leave that to you, as your involvement makes you a subject "specialist".
Thanks,
       GerardM

Tuesday, February 05, 2019

#Wikidata - Naomi Ellemers and the relevance of #Awards

In a 2016 blogpost, I mentioned the relevance of awards. At the time Professor Ellemers received an award and it was the vehicle to make that point in the story.

Today, in an article in a national newspaper, Mrs Ellemers makes a strong point that the perception of awards is really problematic. What they do is reinforce a bias that American science is superior. It leads to a perception by European students that it is the USA "where it is all happening". A perception that Mrs Ellemers argues is incorrect.

NB Mrs Ellemers is the recipient of the 2018 Career Contribution Award of the Society for Personality and Social Psychology.

Wikidata reinforces this bias for American science by including a rating for "science awards". This rating values awards by comparing them. The rating is done by an American organisation, and the whole notion behind it is suspect because the assumptions are not necessarily, or not at all, beneficial for the practice of science.

How to counter such a bias? As far as I am concerned there is no value in making a distinction between awards and "science awards", and biased information like this should be removed. Just consider: when European science is considered less than American science... how would African science be rated?
Thanks,
     GerardM

Sunday, February 03, 2019

Dr Matshidiso Moeti, an exception to my rules

When I add scientists to Wikidata, I really want something to link to, an external source like ORCID, Google Scholar or VIAF. When I link publications it is the data at ORCID I link to; I don't do manual linking.

From the sources I have read, Dr Moeti is the kind of person who deserves a Wikipedia article. Her work, the people she works with and the cases she works on not only deserve recognition; it is imho vitally important that they get it, that you learn about them. This is why I made exceptions to my rule.

This is her Scholia, this is her Reasonator and please, take an interest.
Thanks,
      GerardM

The case for #Wikimedia Foundation as an #ORCID member organisation

The Wikimedia Foundation is a research organisation, no two ways about it; it has its own researchers who not only perform research on the Wikimedia projects and communities but also coordinate research on them, and it produces its own publications. As such it qualifies to become an ORCID member organisation.

The benefits are:
  • Authenticating ORCID iDs of individuals using the ORCID API to ensure that researchers are correctly identified in your systems
  • Displaying iDs to signal to researchers that your systems support the use of ORCID
  • Connecting information about affiliations and contributions to ORCID records, creating trusted assertions and enabling researchers to easily provide validated information to systems and profiles they use
  • Collecting information from ORCID records to fill in forms, save researchers time, and support research reporting
  • Synchronizing between research information systems to improve reporting speed and accuracy and reduce data entry burden for researchers and administrators alike
At this time the quality of information about Wikimedia research is hardly satisfactory. As is standard, announcements are made about a new paper and, as can be expected, the paper is not in Wikidata. The three authors are not in ORCID, as is usual for people who work in the field of computing, so there is no easy way to learn about their publications.
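
Once authors do have an iD, their publication list is one call away on the public ORCID API. A minimal sketch, assuming the v3.0 JSON layout of the /works endpoint; the iD is a placeholder to replace with a real one.

    # Sketch: list the works recorded on one public ORCID record.
    import requests

    ORCID_ID = "0000-0000-0000-0000"  # hypothetical placeholder

    r = requests.get("https://pub.orcid.org/v3.0/%s/works" % ORCID_ID,
                     headers={"Accept": "application/json",
                              "User-Agent": "orcid-works-sketch/0.1"})
    r.raise_for_status()
    for group in r.json().get("group", []):
        for summary in group.get("work-summary", []):
            title = ((summary.get("title") or {}).get("title") or {}).get("value")
            year = ((summary.get("publication-date") or {}).get("year") or {}).get("value")
            print(year, "-", title)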

What will this achieve? It will be the Wikimedia Foundation itself that pushes information about its research to ORCID, and consequently at Wikidata we can easily update to the latest and greatest. It is also an important step towards making its documentation discoverable. It is one thing to publish open content; when it is then hard to find, it is still not FAIR and the research does not have the hoped-for impact. It also removes an issue that some researchers say they face: they cannot publish about themselves on Wikimedia projects. 

Another important plus: by indicating the importance of having scholarly papers known in ORCID, we help reluctant scientists understand that yes, they have a career in open source and open systems, but that their work also needs to be findable to be truly open.
Thanks,
       GerardM

Sunday, January 27, 2019

@Wikidata #quality - one example: Leonardo Quisumbing

Quality happens on many levels. Judge Leonardo Quisumbing passed away and a lot of well-meant effort went into his Wikidata item. The data is inconsistent with our current practice, so in the Wikidata chat people were asked to help fix the data.

Judge Quisumbing held many positions, one of them "Secretary of Labor and Employment". This is a cabinet position and it follows that Mr Quisumbing was also a "politician". It is one thing to add this position and occupation to a person; from a quality point of view it is best to also include a "start date", an "end date", a "replaces" and a "replaced by". The problem: the predecessor and successor do not exist in Wikidata.
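
This kind of completeness is easy to check with a query: for every holder of the position, list which of the four qualifiers are present. A minimal sketch in Python against the public Wikidata query service; the QID of the "Secretary of Labor and Employment" position is a placeholder to fill in.

    # Sketch: holders of one position (P39) and their start date (P580),
    # end date (P582), "replaces" (P1365) and "replaced by" (P1366) qualifiers.
    import requests

    POSITION_QID = "Q12345678"  # hypothetical placeholder for the position item

    QUERY = """
    SELECT ?personLabel ?start ?end ?replacesLabel ?replacedByLabel WHERE {
      ?person p:P39 ?statement .
      ?statement ps:P39 wd:%s .
      OPTIONAL { ?statement pq:P580 ?start . }
      OPTIONAL { ?statement pq:P582 ?end . }
      OPTIONAL { ?statement pq:P1365 ?replaces . }
      OPTIONAL { ?statement pq:P1366 ?replacedBy . }
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    ORDER BY ?start
    """ % POSITION_QID

    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "position-qualifier-sketch/0.1"})
    r.raise_for_status()
    for row in r.json()["results"]["bindings"]:
        print({k: v["value"] for k, v in row.items()})  # missing qualifiers simply do not appear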

Many a Secretary of Labor has a Wikipedia article, and they are included in a category. Using the "Petscan" tool it is easy to import all those mentioned. Typically the quality of the info is good; however, there is always the "six percent" error rate. Indeed, one person was erroneously indicated as a "secretary of labor". The problem is that people who only care about quality on the item level are really hostile to such imported issues. They are best ignored for their ignorance/arrogance.

A next level of quality is to complete the list with all missing secretaries. This can be done, warts and all, from the Wikipedia article. It results in a Reasonator page that includes all the red and black links of the article. Many new items are created in the process, and having automated descriptions is vital in finding as many matches as possible.

Judge Quisumbing became an "Associate Justice of the Supreme Court of the Philippines" and became the senior associate justice in 2007. Adding associate justices from a category was obvious; adding senior associate justices is a task similar to the secretaries of labor. However, a senior is the first among many, and consequently it requires a judgment call on how to express this.

Given that Wikidata is a wiki, you do the best you can to the level that has your interest. There is still a need to improve the Wikidata item for judge Quisumbing but that is for someone else.

Thanks,
       GerardM

Sunday, January 20, 2019

@Wikidata - #Quality in a #Wiki environment

What quality is, and what quality in a data environment is, has been studied often enough. Lots of words are spent on it, but one notion is always left out: what is data quality in a Wiki environment? How does that translate to Wikidata?

First of all, Wikidata serves many purposes. The initial purpose of Wikidata was to replace the in-article "interwiki" links. They were notoriously difficult to maintain and often wrong. A single Wikidata item replaced the links for a subject in all Wikipedias, and this brought stability and a high level of confidence in the result. Over time the quality of the "interwiki" links went down; there are fewer people involved in adding and curating these links, and it is seen as a quality issue when new items are generated for new articles: they do not have statements and are often not linked. There have been protests against these new additions.

A second purpose is the use of Wikidata statements in Wikipedia templates. Assessing data quality becomes complicated as there are micro, meso and macro levels of quality at play. The micro level: is sufficient data available for one template in one Wikipedia article? The meso level: is sufficient data available for one template in the Wikipedia articles on the same topic? The macro level: is the same data available for all interested Wikipedias, and do we have the required labels in those languages?

Quality considerations are driven by this approach. On the micro level you want all awards for a scientist to be linked on an item. On the meso level you want all recipients of an award to be linked to their item. On the macro level you want all awards to have labels in the language of a Wikipedia, and all local considerations to have been met.
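
The macro level lends itself to a simple check: which recipients of a given award have no label yet in a given language. A minimal sketch in Python against the public Wikidata query service; the award QID is a placeholder and "nl" is an arbitrary language code.

    # Sketch: recipients (P166) of one award that lack a label in one language.
    import requests

    AWARD_QID = "Q12345678"  # hypothetical placeholder for an award item
    LANG = "nl"              # arbitrary example language

    QUERY = """
    SELECT ?person ?personLabel WHERE {
      ?person wdt:P166 wd:%s .
      FILTER NOT EXISTS {
        ?person rdfs:label ?label .
        FILTER(LANG(?label) = "%s")
      }
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    }
    """ % (AWARD_QID, LANG)

    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": QUERY, "format": "json"},
                     headers={"User-Agent": "missing-label-sketch/0.1"})
    r.raise_for_status()
    for row in r.json()["results"]["bindings"]:
        print(row["person"]["value"], "-", row["personLabel"]["value"])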

Standard quality considerations are not helpful in a Wiki environment; they are judgemental. People contribute to Wikidata and all have their own purposes. A Wiki is a work in progress, and when quality assessments are to be performed, the question should focus on the extent to which a specific function is supported. What people seek in support also changes; as long as there was no article for professor Angela Byars-Winston it was fine only to know about her for one publication. Now that Jess Wade picked her for an article, it may be relevant that she is the first and so far only person known to Wikidata who was a "champion of change" and that more papers are identified for her.

Wikidata includes many references to scientific papers and authors. However, so far it serves no purpose. Allegedly there is a process underway that imports papers used as citations in the Wikipedias, but it is not clear what papers are used in what Wikipedia article. So far it is a big stamp collection, a collection whose quality is growing rapidly. A collection that highlights authors who are open about their work and who share the details of their work at ORCID. In effect, this data set indicates that the relevance of a scientist improves by being open.

Wikidata invites people to add and curate the data that is of interest to them. Particularly the esoteric data, data about subjects like African geography or Islamic history, needs a lot of tender loving care. It is where Wikidata and the large Wikipedias are weak. For as long as Wikidata is largely defined by the large Wikipedias, it will reflect the same biases, and these biases will be hard to assess and curate.
Thanks,
      GerardM

Tuesday, January 01, 2019

The #decline of #Wikipedia (as we know it)

Regularly, we are told about misgivings about Wikipedia. It cannot stay as it is, it is in decline; it is all doom and gloom. NB: the use of the phrase "doom and gloom" increased in the 1950s.

So Wikipedia will not remain as we know it? GOOD, it forces us to think how we can improve what we have. When things are to change, what will have a healthy impact? How will we get something that serves us better in "sharing the sum of all knowledge"? How will we get more people to use what we have to offer, and how will we entice more people to contribute to the data collection that is included in all the Wikimedia Foundation projects?

First thing: our projects need to be less US-American. For me, a POV situation I was in was "obviously" decided in favour of considering only the USA point of view; I let it slide and went to greener pastures. The money we raise is for "keeping the servers going", an objective a bit too limited to my taste, but it raises the cash. Money is mainly raised in the USA, but in order to be truly global it is better to raise money more equally in every country, at least for the amount it costs to serve it. Gapminder is where you may be reminded that money is everywhere. As to the servers, why have all crucial eggs in one USA basket? Given its current politics, there is indeed a potential doom and gloom scenario possible. Having them more dispersed will bring our data closer to our audience, and to our editors as well, benefiting them with better performance; that is the easy win. A more complicated solution is in the implementation of the Vrije Universiteit research into a peer-to-peer MediaWiki.

When our projects are to be less US-American, it is important for spending to be more global too.

When today's Wikipedia practices are no longer considered to be set in stone, we can finally implement features that enable, ensure and enhance its future. First, we should be less self-centric; after all, there is only one sum of all knowledge and we define only a part of it. Magnus showed how to maintain lists in an efficient way, and Amir recently added a "task" to Phabricator to implement proper disambiguation of "red links". We are increasingly aware not only of the references of all Wikipedias but also of publications by scientists that enable their work to be found. Complement this with the scientific papers we publish and we improve the public relevance of scientists by making them findable, by pointing to their science.

With a changed approach at Wikipedia, we may be bold and change the outlook on what Wikipedia is there for as well. Why not make Wikipedia the gateway to information held elsewhere? Why not show a Scholia page for every scientist we know, why not offer the books at OpenLibrary or inform on the availability of books at the local library? Why not partner with other organisations we share an objective with? But most importantly, let us be aware that an African professor teaches in Africa, and let us allow for and enable the context of our partners and volunteers.

For me there is no reason for doom and gloom, as there are so many opportunities to become even more effective. With a whole new year in front of us, let us do well.
Thanks,
        GerardM