
WDS International Technology Office to Open in Canada

We are pleased to announce that following an international competitive call, the WDS Scientific Committee (WDS-SC) selected a Canadian consortium formed by three WDS Regular Members—Ocean Networks Canada (ONC) at the University of Victoria, the Canadian Astronomy Data Centre (CADC) of the National Research Council (NRC) in Victoria, and the Canadian Cryospheric Information Network/Polar ...

European Members of ICSU Release Statement on 'Open Data in Science in Europe'

European Members of the International Council for Science (ICSU) have released a statement, 'Open Data in Science in Europe', based on the workshop 'Open Data in Science: Challenges and Opportunities for Europe' organized in partnership with All European Academies in January 2018 in Brussels. This workshop consulted 80 representatives from across Europe of science academies, ...

SciDataCon 2018 Call for Abstracts – WDS Sessions

The Call for Abstracts for SciDataCon 2018 is now open until 31 May 2018 (extended from 30 April). Abstracts for papers or posters are invited for sessions at SciDataCon 2018, part of International Data Week (5–8 November 2018; Gaborone, Botswana). Abstracts must be submitted to an accepted session or to the General Submission, and will be reviewed for quality and appropriateness to be ...

PSMSL Celebrates 85th Anniversary with Sea-Level Change Conference

To celebrate its 85th anniversary, the Permanent Service for Mean Sea Level (PSMSL; WDS Regular Member) will host Sea Level Futures: a conference on regional and global sea-level change, the latest sea-level technology, and the future of sea-level research, at both the UK National Oceanography Centre and Liverpool University on 2–4 July 2018. For further information, including an ...


CASEarth: A Big Earth Data Project Launched by Chinese Academy of Sciences

A Blog post by Guoqing Li (WDS Scientific Committee member)

On 12 February 2018, the Chinese Academy of Sciences (CAS) announced the launch of its 'Big Earth Data Science Engineering' (CASEarth) project in Beijing. With total funding of approximately RMB 1.76 billion (USD 279 million), and more than 1,200 scientists from 130 institutions around the world involved, this five-year project (2018–2022) is headed by Huadong Guo, an academician of the CAS Institute of Remote Sensing and Digital Earth.

The overall objective of CASEarth is to establish an International Centre of Big Earth Data Science with the mission of:

1. Building the world’s leading Big Earth Data infrastructure to overcome the bottlenecks of data access and sharing.
2. Developing a world-class Big Earth Data platform to drive the discipline.
3. Constructing a decision-support system to serve high-level government authorities in addressing major issues.

This joint centre will provide very large volumes of data, comprehensively enhancing national technological innovation, scientific discovery, macro-level decision-making, and public knowledge dissemination, among other significant outputs.

Framework of the Big Earth Data Science Engineering Project (CASEarth)

To achieve its goals, CASEarth comprises eight Work Packages (research components) intended to deliver technological advances and innovative results, with special attention paid to data sharing and to encouraging interested scholars to build their research on the platform. For example, CASEarth Small Satellites will provide continuous observational data from space according to the missions of the project; the Big Data and Cloud Service Platform will build a cloud-based Big Data information infrastructure and processing engines with 50 PB of storage and 2 Pflops of data-intensive computing resources; and the Digital Earth Science Platform will integrate multidisciplinary data and information into a visualization facility to meet the requirements of decision-making and application in areas such as Biodiversity and Ecological Security, Three-Dimensional Information Ocean, and Spatiotemporal Three-Pole Environment. In particular, it will provide a comprehensive display and dynamic simulation of sustainable development processes and ecological conditions along the Belt and Road, and accurate evaluation and decision support for the sustainable development of Beautiful China.

In summary, CASEarth is expected to increase open data and data sharing; realize comprehensive assimilation of data, models, and services in the fields of resources, the environment, biology, and ecology; and build platforms with global influence. CASEarth will explore a new paradigm of scientific discovery involving Big Data-driven multidisciplinary integration and worldwide collaboration, and will constitute a major breakthrough in Earth System Science, the Life Sciences, and related disciplines.

For more information, please look at the following references:
 – A paper about Big Earth Data:
https://www.tandfonline.com/doi/full/10.1080/20964471.2017.1403062
 – The Big Earth Data Journal: https://www.tandfonline.com/toc/tbed20/current

A Systematic IT Solution for the Culture Collections Community to Explore and Utilize Preserved Microbial Resources Worldwide

Nominate an Early Career Researcher for the 2018 WDS Data Stewardship Award (Deadline: 21 May 2018)

A Blog post by Linhuan Wu (2017 WDS Data Stewardship Award Winner)

Demands on Information Technology from Culture Collections

Global culture collections play a crucial role in the long-term, stable preservation of microbial resources, and provide authentic reference materials for scientists and industry. With the development of modern biotechnology, available knowledge on any given microbial species is growing at an unprecedented rate. In particular, since the advent of high-throughput sequencing technology, enormous volumes of sequence data have accumulated and are increasing exponentially, making data processing and analysis capacity indispensable to microbiological and biotechnological research. Culture collections must therefore not only preserve and provide authentic microbial materials, but also function as data and information repositories serving academia, industry, and the public.

Figure 1. A system-level overview of the WDCM databases.

There is a gap between the capacity of culture collections and the needs of their potential users for a stable and efficient data management system and advanced information services. Curators and scientists from culture collections now need not only to share data, but also to design and implement data platforms that meet the changing requirements of the microbial community. However, not all culture collections can afford the infrastructure and personnel to maintain their own databases and ensure a high level of data quality, let alone provide additional services such as visualization, statistical, and other analytical tools to enhance understanding and utilization of the microbial resources they preserve.

The WFCC-MIRCEN World Data Centre for Microorganisms (WDCM) was established 50 years ago, and became a WDS Regular Member in 2013. The longstanding aim of WDCM is to provide integrated information services to culture collections and microbiologists all over the world. To clear the roadblocks to utilization of information technology and capacity building in culture collections, WDCM has constructed its own information system, a comprehensive data platform with several constituent databases (Fig. 1).

Figure 2. Web interface of CCINFO database.

Culture Collections Information Worldwide (CCINFO), which serves as a metadata recorder, had collected detailed information on 745 culture collections in 76 countries and regions as of January 2018 (Fig. 2). In addition, WDCM has assigned a unique identifier to each culture collection registered in CCINFO to facilitate further data sharing.

To help culture collections establish online catalogues and further digitalize information on microbial resources, WDCM launched the Global Catalogue of Microorganisms (GCM) project in 2012 and has been gradually building a system offering fast, accurate, and convenient data access. The current version of the GCM database records 403,572 strains from 118 collections in 46 countries and regions, and performs automatic data quality control, including validation of data format and contents—for example, checking species names against taxonomic databases.
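
The post does not describe how GCM's name validation is implemented, so the short Python sketch below is an illustrative stand-in only: it checks species names against the public GBIF species-match API, which is an assumed substitute for whichever taxonomic databases GCM actually queries, and the function name is invented for the example.

```python
# Hedged illustration only: GCM's real validation pipeline is not public.
# Here we check a species name against the GBIF backbone taxonomy via the
# public species-match API, standing in for GCM's taxonomic databases.
import requests

def species_name_is_valid(name: str) -> bool:
    """Return True if GBIF can match the name to a known taxon."""
    resp = requests.get(
        "https://api.gbif.org/v1/species/match",
        params={"name": name},
        timeout=10,
    )
    resp.raise_for_status()
    # GBIF reports matchType "NONE" when the name cannot be matched.
    return resp.json().get("matchType", "NONE") != "NONE"

for name in ["Escherichia coli", "Eschericia colli"]:  # second is misspelled
    print(name, "->", "valid" if species_name_is_valid(name) else "unmatched")
```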

WDCM also developed the Analyzer of Bioresource Citation (ABC), a data mining tool that extracts information from public sources such as PubMed, the World Intellectual Property Organization, the Genomes OnLine Database, and the National Center for Biotechnology Information Nucleotide database. By automatically linking the catalogue information submitted by each culture collection with ABC's data mining results, WDCM greatly enriches the information that users of GCM can acquire in a single search.
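
ABC's internal pipeline is likewise not detailed in the post, but the core idea of counting public literature records that mention a strain identifier can be sketched against NCBI's public E-utilities API. The function and the example strain number are illustrative assumptions, not ABC's actual code.

```python
# A minimal sketch of the kind of literature mining ABC performs: count the
# PubMed records that mention a given strain identifier, using NCBI's public
# E-utilities API. This is a simplified stand-in, not ABC's implementation.
import requests

EUTILS_SEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_mentions(strain_id: str) -> int:
    """Return how many PubMed records match a quoted strain identifier."""
    resp = requests.get(
        EUTILS_SEARCH,
        params={"db": "pubmed", "term": f'"{strain_id}"', "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# ATCC 25922 is a widely used E. coli quality-control strain.
print(pubmed_mentions("ATCC 25922"))
```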

General Standards to Promote Data Sharing

At present, a major problem impeding the exploitation of microbial resources by academia and bioindustries is the low efficiency of data sharing. Since most culture collections use different data formats for data management and publication, users must sift through a huge body of data to find valuable information, and obtaining suitable microbial materials efficiently is harder still; the result is considerable waste of time and money.

Although many international organizations and initiatives have implemented their own data standards or recommended datasets for microbial resources data management—for instance, Darwin Core and the Organization for Economic Cooperation and Development's Best Practice Guidelines for Biological Resource Centres—there is still a long way to go before realizing efficient data exchange, sharing, and integration globally.

WDCM has established minimum and recommended datasets and has implemented them in the database management system of GCM to ensure uniform data formats and data fields. WDCM is also committed to developing a standard under International Organization for Standardization Technical Committee 276 – Biotechnology; namely, AWI 21710, 'Specification on data management and publication in microbial resource centres (mBRCs)'. This work aims to improve the data traceability of microbial resources preserved in different culture collections by normalizing local identifiers and mapping those already in use, and to recommend a machine-readable persistent identifier system to enable data integration.

Conclusion

The WDCM database currently uses a centralized model for data integration. Future developments in 'Big Data' technologies, including the Semantic Web and Linked Open Data, will enable the system to provide more flexible integration of broader data sources. Linking WDCM strain data to, for example, environmental, chemistry, and research literature datasets can add value to data mining and help in targeting microorganisms as potential sources of new drugs or industrial products. Linking microbial strain data to climate, agriculture, and environmental data can also provide tools for climate-smart agriculture and food security. WDCM will work with Research Infrastructures, publishers, research funders, data holders, and individual collections and scientists to ensure data interoperability and the provision of enhanced tools for research and development.
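
To make the Linked Open Data direction above concrete, here is a small sketch, assuming a hypothetical namespace and vocabulary (not WDCM's actual schema), of how a strain record might be exposed as RDF and linked to an external resource such as NCBI Taxonomy.

```python
# A sketch of publishing strain data as Linked Open Data with rdflib.
# The example.org namespace and the MicrobialStrain/taxon terms are
# hypothetical; WDCM's actual vocabulary is not described in the post.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/strain/")   # hypothetical namespace
DCT = Namespace("http://purl.org/dc/terms/")

g = Graph()
strain = EX["ATCC-25922"]
g.add((strain, RDF.type, EX.MicrobialStrain))
g.add((strain, DCT.identifier, Literal("ATCC 25922")))
# Link out to an external dataset: NCBI Taxonomy's record for E. coli.
g.add((strain, EX.taxon, URIRef("http://purl.obolibrary.org/obo/NCBITaxon_562")))

print(g.serialize(format="turtle"))
```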

What Every Early Career Researcher Should Know about Research Data Management

A WDS-SC Opinion Piece by Alex de Sherbinin, Elaine Faustman, & Rorie Edmunds

Introduction

The Scientific Committee of the ICSU World Data System (WDS-SC) believes that all Early Career Researchers (ECRs) require a basic set of data-related skills. The following presents essential areas of Research Data Management (RDM) that are relevant to budding scientists and that cover the range of issues they are likely to encounter as they collect, analyze, and manage data over the course of their careers. It has been formulated on the assumption that ECRs play an important role in future data sharing, and must take an interest in data stewardship and best practices in data management, including how to make data openly accessible and reusable.

The Essentials of RDM

Open Data. Almost all science funding agencies require that research results, including data, be made publicly available. Journals, too, are requesting that authors of scientific articles post their data, and even the code used to generate results. Data sharing and open data are important to the advancement of science, and data reuse has resulted in important scientific discoveries. ECRs need to be familiar with the FAIR principles—that data need to be Findable, Accessible, Interoperable, and Reusable—and work towards data sharing and research transparency in their own work.

Big Data. The term ‘Big Data’ arose to describe the Volume, Variety, and Velocity (the three Vs) of data being generated almost continuously by a range of sciences, from Biomedical to Earth Sciences. An ECR should have an understanding of what is meant by Big Data, and how they are increasingly important to a variety of scientific fields. Familiarity with tools and approaches to analyzing Big Data is also an important requisite for career advancement.

Definitions and Jargon. An ECR must know some of the terminology in the data arena, such as ‘ontologies’, ‘informatics’, ‘metadata’, and ‘knowledge networks’. A critical element for data sharing is common definitions, and particular attention should be paid to understanding ontologies, thesauri, and controlled vocabularies: what ontologies are, where to find them, and how to create them, as well as ways for integrating ontologies and using them to support metadata and data disambiguation efforts.

Funder Requirements and Writing Data Management Plans (DMPs). Funders increasingly require that scientists articulate at the onset of a project, through a DMP, how they will ensure the open availability of their data for the long term. An ECR should know how to thoughtfully prepare a DMP, which will also increase their odds of obtaining funding. Awareness of the domain-specific data repositories where their data may be archived is also important (see below). A conceptually ideal DMP is extensible, interoperable, and machine readable, and an ECR must understand why these aspects are needed and how to address them.
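
As a hedged illustration of what 'machine readable' can mean in practice, the sketch below expresses a fragment of a DMP as structured data. The field names loosely follow the RDA DMP Common Standard for machine-actionable DMPs; the exact keys and all values are assumptions for the example.

```python
# A machine-readable DMP fragment, loosely modelled on the RDA DMP Common
# Standard (maDMP). Keys and values are illustrative, not a definitive schema.
import json

dmp = {
    "dmp": {
        "title": "DMP for hourly tide-gauge reanalysis",  # invented project
        "language": "eng",
        "dataset": [
            {
                "title": "Hourly tide-gauge reanalysis, 1950-2020",
                "personal_data": "no",
                "distribution": [
                    {
                        "format": "NetCDF",
                        "license": [
                            {"license_ref": "https://creativecommons.org/licenses/by/4.0/"}
                        ],
                        "host": {"title": "Domain repository (to be selected)"},
                    }
                ],
            }
        ],
    }
}

# Structured form means funders and tools can parse the plan, not just read it.
print(json.dumps(dmp, indent=2))
```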

Data Organization and Storage. Organization and long-term preservation of data is an increasingly daunting task. As they begin to generate data, an ECR should know methods for ensuring the sustainability and continuance of databases. Documentation, versioning, choice of technology and standards, and archiving also need to be understood. Fundamental to this is the principle that data have several end uses throughout their lifecycle, each with its own requirements, and an ECR should be familiar with the concepts of 'Analysis-ready' and 'Publication-ready' data (the latter having quality assurance, citation, and metadata).

Metadata Formats, Usage, and Data Discovery. Metadata are critical for data discovery and reuse, and are the bread and butter of catalogue services. Metadata standards are strongly format and discipline dependent, but common elements are increasingly captured in efforts by DataCite, DCAT, and others. The International Organization for Standardization (ISO) has also developed a number of domain-specific standards, such as ISO 19115 for geospatial information. An ECR should recognize the importance of proper metadata development, and be aware of a number of the standards that are available.
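
For a concrete feel of what such metadata look like, the snippet below assembles a record containing DataCite's mandatory properties (identifier, creator, title, publisher, publication year, and resource type). The DOI and all other values are invented for illustration.

```python
# A minimal metadata record covering DataCite's mandatory properties.
# Every value here is invented; the DOI is a placeholder, not a real one.
import json

record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.1234/example"},
    "creators": [{"creatorName": "Doe, Jane"}],
    "titles": [{"title": "Hourly tide-gauge reanalysis, 1950-2020"}],
    "publisher": "Example Data Repository",
    "publicationYear": "2018",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}

print(json.dumps(record, indent=2))
```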

Data Documentation. To be of use to other researchers, data need to be carefully documented: how they were developed, what their limitations are, and to what uses they may be put. Incomplete and cursory documentation often renders data unfit for future use. An ECR should have knowledge of the different approaches taken to data documentation in various fields of science, as well as of the increasingly important practice of properly referencing protocols, methods, and samples.

Data Formats and Interoperability. Data formats and applicable standards for data and metadata depend largely on the scientific discipline and the type of software used. Some data formats are common across disciplines, but this is not the norm. An ECR should support open formats and well-entrenched standardized services (e.g., CSV files, DDI, OGC services, and OPeNDAP, to name a few), and an overview of their scope is a useful starting point for making appropriate choices. For a discussion of data standards and interoperability in the health domain, visit AHIMA.
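
As a small, hedged example of working with two of the open formats named above, the sketch below reads a local CSV file with Python's standard library and subsets a remote dataset over OPeNDAP with xarray. The file name, URL, and variable names are placeholders.

```python
# Working with open formats: CSV via the standard library, and a remote
# dataset via OPeNDAP using xarray (requires the netCDF4 or pydap backend).
# "observations.csv", the URL, and the variable names are all placeholders.
import csv

import xarray as xr

# CSV: the simplest cross-discipline interchange format.
with open("observations.csv", newline="") as f:
    rows = list(csv.DictReader(f))
print(f"Read {len(rows)} CSV records")

# OPeNDAP: subset a remote dataset without downloading the whole file.
ds = xr.open_dataset("https://example.org/opendap/sea_level.nc")
subset = ds["sea_level"].sel(time=slice("2000-01-01", "2000-12-31"))
print(subset.mean().values)
```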

Choosing a Long-term Repository. An ECR must understand not only which disciplinary repositories are best suited to the domain in which they are working, but also the 'trustworthiness' of these data repositories, and how this is underpinned by a hierarchy of certification standards (e.g., the CoreTrustSeal). Examining the strengths and weaknesses of different repositories in terms of data access, documentation, and so on helps an ECR to conceptualize what makes for a successful data service.

Standardization, Licences, and Intellectual Property Rights. To aid in their reuse, data should ideally be made available in standardized schema and using standardized services. Each ‘data family’ has its own set of such standards, and an ECR should know which are relevant to their discipline. Moreover, with Open Data an increasing norm in the scientific community (see above), an ECR should be aware of the different types of licensing and copyright arrangements under which data are often disseminated, in addition to the importance of machine-readable licensing arrangements.

Data Ethics. While primarily salient for ECRs working in the Health Sciences, Social Sciences, and Humanities, the ethical issues that arise throughout the data management lifecycle should be of broad interest to all researchers likely to engage with disclosive data (e.g., research on rare biodiversity, where there may be commercial interests in its exploitation). Areas that an ECR should have knowledge about include, but are not limited to: data ownership and stewardship, handling sensitive data, consent, privacy and confidentiality, reconciling ethical and legal norms impacting data sharing and exportation, constructing equitable partnerships and data sharing agreements, and navigating the complexity of ethics review.

Data Publication, Citation, and Persistent Identifiers. An increasing number of data journals, such as the Nature Group’s Scientific Data, are now available for the publication of datasets. In addition, proper citation of data using persistent identifiers is becoming the norm in the scientific community. An ECR should be aware of the approaches to data publication and citation and the importance of doing these properly.
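
One concrete payoff of persistent identifiers is that a DOI can be resolved to machine-readable citation metadata through content negotiation at doi.org. The sketch below shows the pattern; the DOI itself is a placeholder to be replaced with a real dataset DOI.

```python
# Resolve a DOI to machine-readable metadata via content negotiation.
# The DOI below is a placeholder; substitute any real dataset DOI.
import requests

doi = "10.1234/example-dataset"  # hypothetical DOI
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.datacite.datacite+json"},
    timeout=10,
)
if resp.ok:
    metadata = resp.json()
    print(metadata.get("titles"))
else:
    print("DOI did not resolve:", resp.status_code)
```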

Research Translation and Societal Benefits. To facilitate use of data collected and stored within archives, an increasingly wide range of software has been developed for decision analytics and support. In addition, there is a great deal of work on integrating data across disciplines to support new discoveries. An ECR should understand the value that well-curated and sustained data management provides to the scientific community and larger society, and have some understanding of data indicators, decision-analysis techniques, and the graphical interfaces that can simplify exchanges. Linked ontologies and robust metadata can facilitate these possibilities.

Citizen Science and Crowdsourced Scientific Data. Citizen science and crowdsourced data have already proven to be of tremendous scientific value. However, the modest budgets of these initiatives typically mean that systems are lacking for the curation and long-term stewardship of their data. An ECR should know what citizen science is, and how to design an initiative that engages citizens in improving scientific data collection and use: addressing issues of data stewardship, validation, confidentiality, dissemination, and licensing from the beginning. SciStarter provides a good introduction to Citizen Science, and an example of pointers for the design of citizen science can be found at the Cornell Lab of Ornithology.

When the Shift from Analogue to Digital Data Occurred: A Case in Geomagnetic Data Services

A Blog post by Toshihiko Iyemori (WDS Scientific Committee Member)

The World Data Centre for Geomagnetism, Kyoto (WDS Regular Member) has been collecting worldwide geomagnetic observation data under the ICSU World Data Centre (WDC) system/World Data System since 1957, collaborating with other WDCs for Geomagnetism in the United States (World Data Service for Geophysics), Russia (WDC - Solar-Terrestrial Physics, Moscow), the United Kingdom (WDC - Geomagnetism, Edinburgh), Denmark (WDC - Geomagnetism, Copenhagen), and India (WDC - Geomagnetism, Mumbai).

Figure 1 shows the number of observatories from which we keep data in analogue and digital forms.

Figure 1. Numbers of observatories from which data are held in analogue and digital forms.

Optical recording on photographic paper was originally used for most analogue recording. Digital recording of the data observed by modern electronic magnetometers started to increase from around 1980 and finally overtook analogue recording in 1992. By 2000, the number of analogue stations had decreased to less than 10% of the total, and now all data are provided in digital form. In the mid-1990s, the Internet and World Wide Web became popular, and WDC - Geomagnetism, Kyoto started its web service in 1995.

The WDCs for Geomagnetism have been exchanging among themselves the data collected at each data centre for 60 years. During the analogue era, it took money and manpower to collect data from distant observatories and copy them onto microfilm; the big data centres, such as WDC-A in Boulder (now the World Data Service for Geophysics) and WDC-B in Moscow (now WDC - Solar-Terrestrial Physics, Moscow), mainly collected the data and distributed them to the other, smaller data centres. After the shift to digital data and the Internet era, the situation changed: collecting data via the Internet is much easier than collecting photographic papers from distant stations, and international collaboration is also much easier than before.

Nowadays, more than half of geomagnetic data are provided through an international consortium, INTERMAGNET (WDS Network Member). The transition from analogue to digital recording thus also changed the main player in the provision of geomagnetic data services.
