2016 SciDataCon-China: The Third China Scientific Data Conference
A Blog post by Guoqing Li (WDS Scientific Committee member)
On 25–26 August 2016, two weeks before SciDataCon 2016 took place in Denver, USA, the Third China Scientific Data Conference was held in Shanghai, China. As its abbreviation, SciDataCon-China, suggests, this Chinese-language conference is the national-level platform for communication about scientific data, just as SciDataCon (hosted by ICSU’s World Data System (ICSU-WDS) and the Committee on Data for Science and Technology (CODATA)) is at the international level.
2016 SciDataCon-China was co-hosted by Fudan University, which houses the first Data Science Laboratory to be set up in China. More than 380 experts, scholars, and students from universities, institutes, companies, and governmental agencies gathered on the Zhangjiang Campus of Fudan University to attend more than 20 breakout sessions over the two days. Although the number of participants was slightly lower than the 400 who attended the Second SciDataCon-China in 2015, the number of oral reports increased significantly, from around 100 last time to more than 160, making it the leading scientific data conference in China.
In contrast to an Information Sciences approach, SciDataCon-China has kept a domain-oriented emphasis as a primary principle since its beginnings. Breakout sessions mostly served the multidisciplinary community, covering such diverse fields as Materials Science, Astronomy, Space Science, Geography, Ecology, Earth Observation Science, Marine Science, Smart Cities, Precision Medicine, and Agriculture, as well as the management, analysis, and visualization of scientific Big Data.
SciDataCon-China is not only a communication platform for domain scientists and information scientists, but also a dialogue platform for scientific communities and decision-makers. Consecutive sessions on data policy, funding policy, and large-grant programme management were jointly held by the Ministry of Science and Technology and the Chinese Academy of Sciences. An important conclusion of the conference was that the opening and sharing of scientific data should be supported mainly through national finances; in particular, because scientific data can help to accelerate the construction of national innovation capacity.
A session by WDS-China has been a regular and popular feature of each SciDataCon-China since its inception. On this occasion, more than 40 experts from 7 Chinese WDS Members attended the WDS-China session alongside numerous attendees from local data centres. Discussions and reports focussed on the maintenance and future development of Chinese WDS Member Organizations, the sustainability of national scientific data centres, the creation of a uniform metadata service within WDS-China, the long-term preservation of published data, and related topics.
Under the oversight of the WDS Scientific Committee, and supported by the WDS International Programme Office, WDS-China and WDS-Japan are now working together to realize the inaugural WDS Asia-Pacific Symposium: a regional communication platform for scientific data. Thus, there will be a seamless transition of WDS communications from the national, through the regional, to the international level.
Researchers who specialize in a particular Earth Science discipline (seismology, geomagnetism, gravimetry, geochemistry, geology, etc.) cannot fully describe the history and crustal structure of a region of the globe using only their specific research field. They often need to consult a large number of references and databases from other research domains. Interdisciplinary studies are still hampered by the need for researchers to familiarize themselves with many contributions from ‘external’ fields, and to have colleagues in those fields who are willing to collaborate.
Of course, many efforts have been made to group datasets, mainly by discipline, and make them available to the greatest number in a trusted database. However, interdisciplinary approaches still remain the exception. Good ideas are sometimes dismissed for mundane practical reasons: the difficulty of finding a reliable data source understandable to a non-specialist, the trouble of speaking the same language as a scientific colleague in another discipline, a lack of time, and so on.
The greatest advances in the Earth Sciences were made through transdisciplinary collaborations. We often say that ‘the data noise of one proves to be useful information to another’, and vice versa. This is true even within the same discipline; in geomagnetism, for instance, the internal part of a single magnetic field measurement interests the main-field modeller, while its ‘noise’ contains the ionospheric field studied by an ionospheric physicist.
Over the last decades, considerable advances in information technology have made an integrated approach possible, easing access to the tremendous amount of data and products available across the Earth Sciences and related fields. Large multidisciplinary projects have been initiated to facilitate the integrated use of data, data products, and tools from distributed research infrastructures for Solid-Earth science in Europe.
In this matter, EPOS, the European Plate Observing System, is currently one of the most exciting long-term integration projects under development in Europe. The EPOS strategy is not to erase all that was previously done, but to integrate existing national or transnational structures (e.g., permanent seismic and magnetic monitoring networks, and analytical laboratories) and to develop a new interoperability layer that will be seen as a common interface.
Long-standing existing structures (national, European, or international services and data centres), together with newly developed databases (for less centralized/organized disciplines), will be virtually gathered into a central hub whose key functions will be an Application Programming Interface, a metadata catalogue, a system manager, and services that enable users to discover data, interact with the system, and access, download, and integrate data.
Data will be made available from the Solid-Earth Science disciplines that each community deals with, such as seismology, geomagnetism, geodesy, volcanology, geology and surface dynamics, analytical and experimental laboratory research, rock physics and petrology, and satellite information. Available data will be quality controlled according to the appropriate standards as defined by each of the disciplinary data providers.
For pre-existing entities, their visibility will be enhanced. For new structures, their creation will help the community to consolidate scattered data that are hidden and distribute them in a uniform database. For researchers in the Solid-Earth Sciences, EPOS will facilitate innovative cross-disciplinary approaches for a better understanding of the physical processes and the driving forces involved (a seismologist will get access to trusted magnetic anomaly maps; a gravimetrician will be able to use reliable strain rate maps from the Global Navigation Satellite System community to compare with their own results). From a societal point of view, EPOS will enable scientists to better inform governments and society on natural hazards, such as earthquakes, volcanic events, tsunamis, and major land movements.
EPOS is in its implementation phase. By 2018, EPOS is expected to be a legal entity: the EPOS ERIC (European Research Infrastructure Consortium).
Hello! I am a Professor in a School of Public Health who directs an Institute in Risk Analyses and Risk Communication, and in that role I am frequently asked questions on current health risks. The recent Zika epidemic is a significant example of such a request, and provides an opportunity to illustrate use of databases to answer risk assessment questions for this emergent issue.
In risk assessment for Zika virus, we are interested in identifying specific health impacts—including potential birth defects—that may be associated with exposure. We are also interested in the potency of the virus, duration of infection, and whether the duration of the infection relates to the severity of the health impacts. In this post, we pose the question: what databases and data sources exist for us to examine this epidemic but also to be prepared for potential future epidemics? I share with you example databases that I used to answer these questions in a recent journal club. I have also included a series of comments and conclusions about the utility of these databases for risk assessment questions.
Background on Zika Virus
I’d like to start by providing a little background on Zika virus, as one critical step in risk assessment is hazard identification and characterization. Though Zika virus was first discovered in 1947 in Africa, the first large epidemic was not reported until 2007 on the Pacific island of Yap (Al-Qahtani et al. 2016). Since then, outbreaks have been reported in French Polynesia (2013), and in Brazil and surrounding countries (Chang et al. 2016). The first case of Zika virus in Brazil was reported in May 2015. Currently, 30 countries in the Americas have reported active cases of Zika virus. Though Zika is usually transmitted through the bite of a mosquito of the Aedes genus (Aedes albopictus and Aedes aegypti), it can also be spread through sexual activity and intravenous routes, such as blood transfusions. For most healthy individuals, infection can lead to mild flu-like symptoms or even be asymptomatic. However, infection (both symptomatic and asymptomatic) during pregnancy can lead to irreparable birth defects that severely impair child development (Kleber de Oliveira et al. 2016).
The most common birth defect associated with Zika virus exposure during pregnancy is microcephaly (Rasmussen et al. 2016). The basic definition of microcephaly is 'the clinical finding of a small head compared with infants of the same sex and gestational age' (CDC 2016). Problematically, there is no universally accepted definition of microcephaly; thus, when tracking cases of microcephaly and Zika virus across healthcare providers, provinces, states, countries, and regions, the criteria employed can be drastically different. Inconsistencies in data collection techniques frequently limit the ability of Public Health professionals to accurately identify and predict Zika-induced microcephaly cases. To add further complications, microcephaly is not unique to Zika infection, but can be caused by a number of environmental and viral exposures, such as toxoplasmosis, rubella, cytomegalovirus, herpes, HIV, syphilis, mercury, alcohol, and radiation, as well as genetic and maternal health conditions including poorly controlled maternal diabetes and hyperphenylalaninemia (CDC 2016).
Figure 1: Visual representation of microcephaly (CDC 2016)
This fast-spreading epidemic demonstrates the need for access to global databases tracking the spread of mosquito species, infections, and birth defects under both current and future climate conditions. Next, I will describe databases and data sources relevant to tackling this multifaceted global health risk.
Mosquitos: Because Zika virus is a vector-borne infection, tracking the distribution of both Aedes albopictus and Aedes aegypti under current and future climate conditions will be critical to combating seasonal outbreaks, preventing the geographical spread of current outbreaks, and developing long-term strategic interventions to interrupt the vector–host pathway. HealthMap provides an excellent resource for tracking and predicting the spread of Zika virus with up-to-date interactive maps that show the distributions of both mosquito species and Zika infections on a global scale. Through an automated system, HealthMap updates distributions on a daily basis and provides convenient interfaces in nine different languages. Because the Zika epidemic has spread at such an alarming rate, the availability of data in real time is critical. In addition to Zika cases, HealthMap also tracks Yellow Fever, West Nile Virus, and Chikungunya, which are related to Zika virus. By co-tracking these better-characterized viruses, we may be able to translate lessons learned into Zika research and prevention. The Centers for Disease Control and Prevention (CDC) also tracks mosquito distributions in the United States. These ranges show that while Aedes aegypti distributions are primarily in the southern region of the United States, the Aedes albopictus distribution reaches as far north as New Hampshire and extends into the Midwest, reaching Minnesota. While this does not mean that Zika will spread in all of these areas, knowing mosquito distribution patterns can help communities prepare and mitigate risks.
Figure 2: HealthMap shows the distribution of Zika (purple dots) along with the distribution of Aedes aegypti, available here.
As the global climate changes, mosquito distributions are predicted to expand. Many options exist for predicting changes in mosquito distribution under increased temperatures and shifting global precipitation patterns (see resources below). Many of these programs have been optimized to describe changes in malaria infections (e.g., Medlock et al. 2015). Lessons learned from malaria surveillance programs that predict climate-related changes in disease can be translated to the prediction of Zika epidemics.
Zika Infections: Both the World Health Organization (WHO) and CDC are actively tracking global cases of Zika virus. However, because infection can be mild or asymptomatic, it is expected that these may be underestimates. Additionally, Zika infections occurring in underserved communities may go unreported due to lack of access to healthcare.
Figure 3: Distribution of Zika infections in the United States from CDC found here.
Birth Defects Registries: Both CDC and WHO track incidents of microcephaly, at national and global scales respectively. Generally, birth defects are identified by active or passive surveillance systems. Under active surveillance, Public Health or healthcare professionals seek out birth defect information; for example, an expert goes to hospitals and reviews medical reports to find babies with birth defects. Passive surveillance, on the other hand, relies on doctors or hospitals to send reports to the Public Health Department responsible for tracking birth defects. In this model, doctors and healthcare providers must be able to accurately diagnose birth defects and report them to the proper Public Health Department. Hybrid approaches are also used, in which the surveillance is passive but Public Health professionals follow up to confirm birth defect reports. Microcephaly is particularly complicated to track because of discrepancies in how the condition is diagnosed. Comparing countries with active and passive surveillance systems is complex and often introduces biases into the analyses. Additionally, depending on the legal and healthcare environment, women carrying fetuses with known birth defects may terminate their pregnancies before a birth defect can be reported, leading to an underestimation of birth defects. These complexities make international comparisons of birth defects complicated.
Dysmorphology: Efforts to standardize the definitions of congenital abnormalities, including microcephaly, are important in harmonizing data collection at national and international levels. CDC uses the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) ontology as a controlled vocabulary for describing congenital abnormalities. Additionally, SNOMED CT has compiled an extensive database of known causes of microcephaly, including genetic abnormalities.
Databases were available that answered all of these questions and provided additional details on potential challenges related to data collection. However, separate databases need to be consulted to track microcephaly and Zika cases alongside mosquito populations under current and projected climate scenarios. Some of these databases are updated automatically and consistently; others have to be updated manually and can become out of date relatively quickly. Current projections are being used to answer questions about the global and local risks associated with the upcoming Olympic Games.
The available databases enabled decision-makers to craft location-specific risk communication advice and to make predictions of vector spread. As with many emerging risks, more information is always needed, and the frequency of database updates therefore directly drove the frequency of revised messages. Information sources differed in detail and were dynamic. With birth defects in particular, getting the message wrong or relying on inaccurate data can result in serious healthcare actions. Most of the databases we accessed to make these assessments were government- and/or agency-based and are best used for population-level predictions rather than for individual patient-based decisions. At the population level, these databases were exceptionally helpful.
All in all, we found a wide variety of databases available that are relevant to understanding and predicting risks associated with Zika virus. Some weaknesses include: a lack of international standards for diagnosing microcephaly; difficulties in quantifying the prevalence of Zika virus in rural and underserved communities; infrequently updated databases; and a lack of 'one-stop shopping'. However, there are many promising tools such as HealthMap, which contains information on both mosquitos and Zika cases and is frequently updated.
Special thanks to M. Smith and D. Pyle of the Institute for Risk Analysis and Risk Communication for their contributions to this blog post.
Climate change models for mosquito spread:
– Medlock, J. M. and S. A. Leach (2015) 'Effect of climate change on vector-borne disease risk in the UK.' The Lancet Infectious Diseases 15(6): 721–730.
– Paz, S. and J. C. Semenza (2016) 'El Niño and climate change-contributing factors in the dispersal of Zika virus in the Americas?' The Lancet 387(10020): 745.
– Sucaet, Y., J. V. Hemert, B. Tucker and L. Bartholomay (2008) 'A Web-based Relational Database for Monitoring and Analyzing Mosquito Population Dynamics.' Journal of Medical Entomology 45(4): 775–784.
– Vector Map
The world as a whole faces many more changes in its climate than it did in past decades. Such global changes have direct impacts on both the social and economic aspects of our lives (human activity), as well as on the environment. To comprehend the complexity of these phenomena and their effects on the aforementioned sectors requires in-depth investigations. To this end, environmental observations can supply information about past climates while providing benchmarks for comparison with future changes. The observations hence serve as a basis for assessing potential impacts and for planning adaptation measures and mitigation policies against them.
The Institute of Research for Development (IRD, France) has been involved for many years in observing the environment in intertropical zones. The observation systems it has put in place are an integral part of the research carried out by IRD and its partners in developing countries. The ongoing operation of these systems is essential to gain an understanding, over a sufficiently long period, of variations in both environmental processes and major cycles within the current context of climate change and accelerated development of human activity.
The observatories are jointly operated and managed with partners from the South and the North, which promotes North–South and South–South exchanges. They back up data and results, make them available to scientific communities, and disseminate them to a wider audience. These actions thus build on and complete environmental monitoring efforts carried out in each country by local organizations or inter-governmental entities—which include training and technology transfer initiatives and an aim to foster academic training in topic-based schools.
With standard observational procedures and certified data, we can together overcome these global issues.
Firstly, most of us agreed that being able to reproduce the result of queries (and potentially other transformations or processes) applied to data or subsets of the data was the hardest of the guidelines to implement.
One can deal with this by keeping archived copies of all such query and transformation results (painless to implement, but potentially devastating from a storage-provisioning perspective), or by storing the query and transformation instructions themselves, with a view to reproducing the query or transformation result at some point in the future.
This second option equates to always starting with base ingredients (egg yolks, lemon juice, butter, and maybe mustard or cayenne) and storing them with a recipe (in this case, for Hollandaise Sauce). This option is also painless to implement, until there is a change in the underlying database schema, code, or both, in which case one will have to maintain backward compatibility (potentially almost ad infinitum) so that historical operations continue to work, or maintain working copies of all historical releases for the purpose of reproducing a query or transformation result at some point in the future. Clearly this is not very practical.
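As a minimal sketch of the 'store the recipe' option (the table, query, and helper names here are invented for illustration), one might pair each stored query with a hash of its result, so that a later re-execution can at least detect when schema or data drift has changed the outcome:

```python
import hashlib
import json
import sqlite3

def run_and_record(conn, query, log):
    """Execute a query, and record the query text alongside a hash of its
    result so that a future re-execution can be checked for divergence."""
    rows = conn.execute(query).fetchall()
    digest = hashlib.sha256(json.dumps(rows).encode()).hexdigest()
    log.append({"query": query, "result_sha256": digest})
    return rows

# Demo with an in-memory database standing in for a real archive.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obs (station TEXT, value REAL)")
conn.executemany("INSERT INTO obs VALUES (?, ?)", [("A", 1.5), ("B", 2.5)])

log = []
first = run_and_record(conn, "SELECT * FROM obs ORDER BY station", log)

# Later: re-run the stored query and compare hashes. A mismatch signals
# that the underlying data or schema has changed since the result was
# recorded -- exactly the backward-compatibility problem described above.
second = run_and_record(conn, log[0]["query"], log)
```

This does not solve the maintenance problem, but it turns a silent divergence into a detectable one.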
By the way, there were some excellent ideas on how to record recipes systematically: Lesley Wyborn presented work on defining an ontology whereby queries and transformations could be documented as an automated script, and Edzer Pebesma and colleagues are conceiving an algebra for spatial operations with much the same objective in mind.
This approach, of course, requires an additional consensus: at what point do we store results as a new dataset instead of executing a potentially longer and longer list of processes on original data? There must be some value to buying Hollandaise Sauce off the shelf for our Eggs Benedict—at least some of the time.
This assertion set me thinking about the process of reproducing results in the new world of data-intensive science, a world in which code and systems are increasingly distributed and reliant on external vocabularies, lookups, services, and libraries (which may themselves be referenced by persistent identifiers). None of these resources, which may have a significant effect on the result of a process should they change, is under the control of the code running in my environment. Which brings us to Claerbout’s Principle:
"The scholarship does not only consist of theorems and proofs but also (and perhaps even more important) of data, computer code and a runtime environment which provides readers with the possibility to reproduce all tables and figures in an article."
Easier said than done. We can, of course (as we should in a world of formal systems engineering), insist on proper configuration control and versioning of all components, internal and external, but I am not convinced that the research community is ready for this level of maturity, which is typically reserved for moon rockets and defense procurement and comes with orders of magnitude of additional cost. Perhaps more importantly, the scientists writing code are not going to invest the time and effort to document, version, and package their code to a standard that supports reproducibility. Hence, the code that we use to transform our data, whether we like it or not, will not automatically produce the same result at some unspecified point in the future, especially if it has external web-based dependencies (which, in turn, may have external dependencies of their own). There is some utility in packaging entire runtime environments (much in the way that one could persist the result of a query or transformation), but this does not solve the problem of external dependencies.
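One cheap partial measure, sketched here in Python (the package names queried are examples only), is to snapshot the local runtime alongside any published result. It does nothing for external web services, but it at least records what a reproduction attempt would need to match:

```python
import importlib.metadata
import platform

def snapshot_environment(package_names):
    """Capture interpreter, platform, and package versions so that a
    published result carries a record of the runtime that produced it."""
    env = {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "packages": {},
    }
    for name in package_names:
        try:
            env["packages"][name] = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            # Record the absence too: "not installed" is itself provenance.
            env["packages"][name] = "not installed"
    return env

# Example: record the versions this (hypothetical) analysis depended on.
env = snapshot_environment(["numpy", "requests"])
print(env["python"], env["packages"])
```

Storing such a snapshot next to each result is far short of full configuration control, but it is the kind of low-effort habit that working scientists might actually adopt.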
Which raises an interesting dilemma: in the world of linked open data, the semantic web, and open distributed processing, the state of the web at any point in time cannot be reproduced ever again—which may create significant issues for reproducible science if it uses any form of distributed code.
Not only that! As we rely more and more on processing enormous volumes of data by digital means, we will depend more and more on artificial intelligence, machine learning, and automated research. As the body of knowledge available to automated agents changes, so presumably, will their conclusions and inferences.
So...we need a new consensus on what science means in the era of data-intensive, increasingly automated science: our rules, notions, and paradigms will soon be outdated.
Fitting subject for an RDA Interest Group, I would think.
It is often said—disparagingly—that America’s culture is a consumer culture. Although it may be true that America’s consumerism is problematic, not least for the planet, the flip side is how consumer culture drives a service mentality in businesses and government. The old adage that “the customer is king” does motivate US government agencies and government-supported centers, including NASA’s Distributed Active Archive Centers (DAACs), to innovate and improve services in response to user feedback and evolving user needs.
Since 2004, NASA’s Earth Science Data and Information System (ESDIS) Project [WDS Network Member] has commissioned the CFI Group to conduct an annual customer satisfaction survey of users of Earth Observing System Data and Information System (EOSDIS) data and services available through the twelve DAACs. The American Customer Satisfaction Index (ACSI) is a uniform, cross-industry measure of satisfaction with goods and services available to US consumers, including both the private and public sectors. The ACSI represents an important source of information on user satisfaction and needs that feeds into DAAC operations and evolution. This may hold some lessons for WDS data services more broadly as they seek feedback from their users, and endeavor to expand their user bases and justify funding support.
The ACSI survey invitation is sent to anyone who has registered to download data from the NASA DAACs. In the past registration was ad hoc, and each DAAC had its own system. In early 2015, ESDIS began implementing a uniform user registration system called EarthData Login that requires that users establish a free account before they can access datasets. Accounts are associated with a given DAAC, but they allow access to data across all the DAACs. All those who register are sent invitations to fill out the ACSI survey. Response rates vary from a few percent among most DAACs, to as high as 38% for the Land Processes DAAC [WDS Regular Member] (which also has the highest number of respondents at just over 2,000).
In 2015, the overall EOSDIS ACSI was 77 out of 100, which is better than the overall government and National ACSI scores for 2015 (64 and 74, respectively), but lower than the National Weather Service (80). This score is based on users’ overall assessment of satisfaction with each data center based on expectations and comparison with an “ideal” data center. The ACSI model provided by the CFI Group also assesses specific “drivers” of user satisfaction—customer support, product search, product selection and order, product documentation, product quality, and data delivery—and their relative importance to the overall ACSI score. This allows the DAACs to identify areas where improvement is needed and should have the most impact on overall satisfaction.
The ACSI enables the EOSDIS to assess changes from year to year. For example, from 2014 to 2015 customer support went from 89 to 86, with drops in professionalism, technical knowledge, helpfulness in correcting a problem, and timeliness of response (all statistically significant). Many changes likely reflect the fact that the pool of survey respondents changes over time, as do their expectations, rather than actual drops in service provision. But for individual DAACs, declining scores in certain areas, in combination with free-text responses to open-ended questions, can help to flag issues that are in need of attention.
For example, the ACSI scores and free-text responses to open-ended questions helped our DAAC—the Socioeconomic Data and Applications Center (SEDAC) [WDS Regular Member]—in undertaking a major website overhaul in 2011. From a disparate set of pages with different designs, we created a coherent site with consistent navigation. The resulting site was evaluated very favorably by Blink UX, a user experience evaluation firm that reviewed all of the DAAC websites. Deficiencies in data documentation for selected datasets have also been pointed out by survey respondents, and we are now reviewing our guidelines for documentation to ensure that all datasets meet a minimum standard. Some users indicated difficulty in finding the latest dataset releases, so we are developing an email alert system for new data releases.
At the Alaska Satellite Facility (ASF) DAAC [WDS Regular Member], the ACSI results have been very helpful in getting a sense of how people are using ASF DAAC data and services. The free-text responses to questions regarding new data, services, search capabilities, and data formats are particularly informative. For example, one user suggested that it would be useful to have quick access to Synthetic Aperture Radar data for specific regions in the world for disaster response. A data feed was developed after the recent Nepal earthquake that notified users of any new Sentinel-1A data received at ASF DAAC for that specific area. This data feed quickly provided additional data for disaster responders and researchers studying this event. Data feeds are now available for several seismically active areas of the world that have been designated by the scientific community (i.e., Supersites).
Overall, the strong EOSDIS ACSI scores have been important in objectively demonstrating and documenting the continuing value of EOSDIS and the individual DAACs to the broad user community. The annual score is reported as one of NASA’s annual performance metrics, supporting NASA’s goal to provide results-driven management focused on optimizing value to the American public.
Although surveys can be costly, and the response rates low, WDS Members would do well to consider periodic surveys of users. We find that highly motivated users do respond and provide really useful suggestions, especially if they find that their responses actually lead to tangible changes in their user experience. While annual surveys may be more than is needed, surveys every 2–3 years could provide your data service with valuable feedback on its content and services. And of course, none of this should supplant other mechanisms for gathering user feedback, such as help desk software (e.g., UserVoice used by SEDAC or Kayako used by NASA’s EarthData), email, and telephone helplines. Through these multiple mechanisms, our user communities can help drive significant improvements in the services offered by WDS Members and the successful use of our valuable data by growing numbers of users.