

Dec 12

The American Geophysical Union (AGU), with more than 62,000 members from more than 140 countries, advances the Earth and space sciences by catalyzing and supporting the efforts of individual scientists. AGU galvanizes a community of Earth and space scientists that collaboratively advances and communicates science and its power to ensure a sustainable future.

AGU will hold its fall meeting next week, December 15-19, in San Francisco, CA. Nearly 24,000 attendees are expected, making this the largest Earth and space science meeting in the world. With more than 1,700 sessions, the program offers a unique mix of more than 23,000 oral and poster presentations.

Several members from the VIVO Community will be presenting their research, some of which demonstrates the central role of VIVO in their work.

UCAR (University Corporation for Atmospheric Research)

Matthew Mayernik and colleagues will co-convene a poster and oral session on Wednesday, December 17, on the topic of "Semantic Web and Provenance: Distributed Earth Science Resources in the Data Life Cycle". The presentations and posters will focus on the use of semantic web and provenance technologies to better represent Earth science phenomena and to facilitate the discovery and use of Earth science information and data resources. Several members of the VIVO Community will be part of this session, and their work is described below.

The Tetherless World Constellation at Rensselaer Polytechnic Institute (RPI) 

Peter Fox, Patrick West, Stephan Zednik, Han Wang, Yu Chen, and others from RPI will deliver a number of posters and oral presentations.

Deep Carbon Observatory (DCO)

Han Wang, et al. will present a poster entitled “DCO-VIVO: A Collaborative Data Platform for the Deep Carbon Science Communities”. VIVO plays an integrative role in the DCO project because thousands of DCO scientists from institutions across the globe are involved in cross-community and cross-disciplinary collaboration, a distinctive feature of DCO's flexible research framework. An excerpt from Han’s abstract describes how VIVO is being used in the DCO project:

“The DCO-VIVO solution expedites research collaboration between DCO scientists and communities. Based on DCO's specific requirements, the DCO Data Science team developed a series of extensions to the VIVO platform including extending the VIVO information model, extended query over the semantic information within VIVO, integration with other open source collaborative environments and data management systems, using single sign-on, assigning of unique Handles to DCO objects, and publication and dataset ingesting extensions using existing publication systems. We present here the iterative development of these requirements that are now in daily use by the DCO community of scientists for research reporting, information sharing, and resource discovery in support of research activities and program management.”

The Laboratory for Atmospheric and Space Physics (LASP)

Members of the VIVO Community at LASP in Boulder, CO will also be at the AGU meeting. A poster describing LASP's work using VIVO to create a semantic database of metadata about LASP datasets will be presented. Anne Wilson, Michael Cox, Doug Lindholm, Irfan Nadiadi, and Tyler Traver will describe LASP’s work and VIVO’s key role:

"The LASP extended metadata repository, LEMR, is a database of information about the datasets served by LASP. The database is populated with information garnered via web forms and automated processes. This information can be pulled dynamically for many purposes. Web sites such as LISIRD can include this information in web page content as it is rendered to ensure that users get current, accurate information. It can also be pulled to create metadata records in various metadata formats.

The LEMR database has been implemented as an RDF triplestore, coupled with SPARQL over HTTP read access to enable semantic queries over the repository contents. To create the repository, the LASP team leveraged VIVO to manage and create new ontologies. A variety of ontologies were used in creating the triplestore, including ontologies that come with VIVO, such as FOAF. Also, the W3C DCAT ontology was integrated and extended to describe properties of data products that need to be captured, such as spectral range. The LASP presentation will describe the architecture, ontology issues, and tools used to create LEMR and plans for its evolution."
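The abstract doesn't include code, but SPARQL-over-HTTP read access of the kind described can be sketched in a few lines of Python using only the standard library. The endpoint URL and the DCAT-style class and property names below are illustrative assumptions, not LEMR's actual vocabulary:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint for illustration; LEMR's real URL is not given here.
ENDPOINT = "http://example.org/lemr/sparql"

# A query in the spirit of the description above: list datasets and titles.
# The dcat:Dataset / dcterms:title pattern is an assumption.
QUERY = """
PREFIX dcat: <http://www.w3.org/ns/dcat#>
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?dataset ?title WHERE {
  ?dataset a dcat:Dataset ;
           dcterms:title ?title .
} LIMIT 10
"""

def sparql_select(endpoint, query):
    """POST a SELECT query and return bindings from the standard
    SPARQL 1.1 JSON results format."""
    data = urllib.parse.urlencode({"query": query}).encode()
    req = urllib.request.Request(
        endpoint, data=data,
        headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]

def titles(bindings):
    """Pull the plain title strings out of SPARQL JSON result bindings."""
    return [b["title"]["value"] for b in bindings]
```

The standard SPARQL 1.1 JSON results format nests each variable binding under its name, which is why `titles()` reaches two levels deep into each binding.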

Interested in following AGU Fall 2014 happenings? Here’s the hashtag: #AGU14

Dec 12

15 VIVO Goals for 2015-2016, Selected by the VIVO Strategy Group, Are Presented for Comment




Last week, eleven members of the VIVO Strategy Group met on December 1 & 2 at Northwestern University's Evanston, IL campus to set priorities for the upcoming two years. Prior to the meeting, a 3-question survey was sent to 41 people who are members of the VIVO Leadership Group, the VIVO Steering Group, the VIVO Management Team, and the VIVO Strategy Group. 

Survey Questions:
  • What do you think the value proposition is for VIVO?  
  • What do you see as VIVO’s top goals in the next 2-3 years?
  • What do you think are the key issues and challenges for VIVO that need to be addressed in the next 2-3 years?

Twenty people answered the survey and provided a total of 181 bulleted responses. The answers were categorized into three strategic themes: Community; Sustainability; and Technology (see survey results and other meeting information here).

Key objectives of the VIVO Strategy Meeting:
  • develop a shared understanding of VIVO's value proposition; 
  • discuss VIVO goals & issues; 
  • prioritize VIVO goals; and 
  • develop high-level action plans for prioritized VIVO goals.
Working Value Proposition

A working value proposition was drafted for further refinement: VIVO provides an integrated, searchable view of the academic work of an organization.

Five goals were selected for each strategic theme and are presented below. High level action plans are being finalized and will be communicated in the very near future.

Top 15 2015-2016 goals for VIVO categorized by Strategic Theme:
  1. Increase productivity of the VIVO project.
  2. Develop a more transparent governance operation with clear roles and responsibilities.  
  3. Increase the number of contributors and the work they do to support software development, ontology development, and other activities in the community.
  4. Maintain a current and dynamic web presence.
  5. Develop key goals and activities for leveraging key partnerships outside of the VIVO community (e.g. ORCID, CRIS, CASRAI, W3C, SciENcv, CRediT, etc.) and align with VIVO strategy.
  1. Create an inclusive and welcoming open source community aligned with VIVO's mission.
  2. Develop a clear value proposition for VIVO that explains the benefits of VIVO, the data model and the ontology.
  3. Establish a road map.
  4. Increase adoption by growing installed base.
  5. Clearly identify and aggressively promote the value of membership.
  1. Develop democratic code contribution (and ontology contribution) processes. 
  2. Clarify the Core VIVO architecture, including guidelines and processes for making contributions available.
  3. Develop VIVO search for cross-institutional and cross-platform use, thus expanding the capabilities to existing and future users.
  4. Improve/increase VIVO core modularity with plug-and-play reasoners, triple stores, and search engines, e.g. to allow 3rd-party and user-developed functionality to be easily integrated.
  5. Institute a distributed, team-based development and release management process for all VIVO project work.

Your comments are strongly encouraged. Please send your feedback to Layne Johnson, VIVO Project Director, at ljohnson at

Dec 1

This week a 15-member VIVO Strategy Team is meeting at the Northwestern University Library on the Evanston campus to review issues and set goals for the VIVO project.

Representatives include members of the VIVO Leadership Group, the VIVO Steering Group, the VIVO Management Team and others who serve on the Strategy Team. Meeting goals include:

  • Review and Agree Upon a Shared Understanding of VIVO's Value Proposition
  • Identify and Discuss Key VIVO Goals & Issues
  • Prioritize Key VIVO Goals & Issues
  • Develop High Level Action Plans for Prioritized VIVO Goals & Issues
  • Identify Key Means for Communicating and Vetting the Strategic Plan
  • Develop Next Steps
Priorities will be established along three Strategic Themes: Community, Sustainability, and Technology. Strategy survey results from 19 respondents will be reviewed and used to establish priorities. High-level action plans will be created for five of the highest-priority goals from each of the Strategic Theme categories. The strategic priorities and action plans will help the VIVO Community focus on key goals that need to be accomplished and contribute to building better roadmaps for the successful future of the VIVO project.

Nov 18

One particular VIVO project that demonstrates the spirit of open access principles is Yaffle. Many VIVO implementations provide value to their host institutions, ranging from front-end access to authoritative organizational information to highlights of works created in the social sciences and arts and humanities. Yaffle extends beyond its host institution and provides a cohesive link between Memorial University and the citizens of Newfoundland and Labrador. Yaffle is expected to launch in other parts of Canada in the near future.

Yaffle Memorial University

Memorial University has a special and serious obligation to the people of the province, Newfoundland and Labrador.  Part of Memorial's commitment to fulfill that social mandate is to engage with public and community partners in research that solves real world issues.


To facilitate that exchange, Memorial University developed Yaffle, a web connection and engagement application that supports Knowledge Mobilization, the bi-directional creation and sharing of knowledge. This past year the Yaffle team at Memorial’s Harris Centre joined forces with the VIVO initiative and worked together to model Knowledge Mobilization. One result of that work is the Yaffle Knowledge Mobilization Ontology, an ontology that was designed to fit within the VIVO-ISF framework. In addition to adding a semantic layer to the Yaffle technology stack, the Yaffle team developed a content management layer in Drupal and an API for writing content to VIVO.


Memorial University researchers and the public use Yaffle to highlight their diverse work, unique interests, and valued expertise around the province and the world! In the Spring of 2015 Yaffle will launch at two additional schools in the Atlantic provinces.


For more information visit or contact Lisa Charlong at


VIVO is a community sponsored project and is sustained by members like you. To become a member, please visit the website here.


Explore VIVO at and check out resources (including the activities of VIVO’s Working Groups) on the VIVO wiki. Track the VIVO blog and follow us on Twitter @VIVOcollab.

Nov 6

Winchester, MA  Being able to discover data and understand the connections among earth and atmospheric field experiments, research teams, datasets, research instruments, and published findings is a key objective of the U.S. National Science Foundation’s EarthCube Project program ( To reach this goal, linked and open data principles are being used to adapt the VIVO semantic web platform so that it can be applied to large-scale field experiments involving many investigators from multiple institutions. The Project is being developed by a partnership of institutions: the NCAR/UCAR Library and the Earth Observing Laboratory within the National Center for Atmospheric Research/University Corporation for Atmospheric Research; Cornell University Library; and UNAVCO, a non-profit university-governed consortium that facilitates geoscience research and education using geodesy. The two-year project entitled “Enabling Scientific Collaboration and Discovery through Semantic Connections” is funded by EarthCube, which supports transformative approaches to data management across the geosciences.

VIVO is a tool that connects heterogeneous information through a single linked data point, like a person, to create a traceable path through a data system. Using VIVO, researchers can see the connections between organizations through many entry points. EarthCube will provide an operational example of how the VIVO network-based data model can serve as a middle layer between technology and data stores that support large-scale, virtual organizations.
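The "traceable path" idea can be illustrated with a toy example: represent each connection as a subject-predicate-object triple and walk outward from a single entry point. All identifiers below are invented for illustration; a real VIVO instance stores such links as RDF and answers these questions with SPARQL queries.

```python
# Toy triples: (subject, predicate, object). All names are invented.
TRIPLES = [
    ("person:jsmith", "participatesIn", "campaign:bering-sea"),
    ("campaign:bering-sea", "produced", "dataset:ctd-2009"),
    ("dataset:ctd-2009", "citedBy", "paper:doi-123"),
]

def related(start, triples, max_hops=3):
    """Breadth-first walk from one entry point across the link graph,
    returning everything reachable within max_hops links."""
    seen, frontier = {start}, [start]
    for _ in range(max_hops):
        frontier = [o for s, _, o in triples if s in frontier and o not in seen]
        seen.update(frontier)
        if not frontier:
            break
    return seen
```

Starting from a single person, the walk reaches the campaign, its dataset, and a paper citing that dataset; this is the kind of path a VIVO profile page exposes through its links.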

Mary Marlino, NCAR/UCAR Library Director, is excited about implementing a model for a large-scale VIVO-based linked and open data system with EarthCube. “International scientific research campaigns can involve hundreds of researchers from many different countries across many institutions. Research results are often “siloed” in a field’s journal instead of being shared widely across an entire campaign. Associated ethnographic and anthropological details are missed because data connections between aspects of the campaign do not exist. VIVO is interesting because it can map and connect the many facets of research campaigns together. This is a model for how science is conducted in the 21st century,” she said.

EarthCube will create a rich network of information, linking field experiments with particular datasets, authors, publications, and even research tools that result from, or are associated with each experiment. VIVO will use data from two sources: a recent interdisciplinary field program whose data archive is hosted by NCAR’s Earth Observing Laboratory (the Bering Sea Project), and a set of diverse research projects informed by geodetic tools, such as GPS networks and ground-based imaging, that are operated and maintained by UNAVCO.

“We envision supporting the geoscience community in terms of their real, on-the-ground data needs. EarthCube is building integrated geoscience and technology collaborations that will help evolve methods for research data collection, preservation and re-use across scientific disciplines,” explained Matthew Mayernik, who is the principal investigator on the EarthCube project and a research data services specialist in the NCAR/UCAR Library. “Making connections between different types of resources within and across data facilities will help researchers to discover and use the heterogeneous information collections that are emerging in all research fields.”

Initial data for this project will come from the NCAR Earth Observing Lab and UNAVCO data and metadata collections. Data about publications and people will come from other sources. The project also plans to develop sharing protocols for both pulling and pushing data across the VIVO network to take advantage of the growing VIVO community, reduce information duplication across instances, and allow users to see connections between people and organizations that exist across VIVO instances. 

For further information, please visit:


Oct 31

Brian Lowe walks the Implementation and Development Working Group through the finer details of Inferencing and Reinferencing at the October 30, 2014 Working Group Call.

If you've never had the chance to learn about this underlying technology and why it is so important for adding power to VIVO and other semantic-web based systems, a review of the Working Group Minutes will certainly pique your interest. You may access the topic at

Working Group minutes are rarely shared on the blog space, but this meeting was so educational that it warrants broad distribution.

Even though it's Halloween, don't let this material scare you! For further information, contact Brian Lowe at: or

submit your questions at or 


Oct 22

Boulder, CO Ten months ago, the Laboratory for Atmospheric and Space Physics (University of Colorado at Boulder) watched as the MAVEN spacecraft lifted off from Cape Canaveral Air Force Base on board an Atlas V launch vehicle. On board MAVEN is a scientific instrument (IUVS) built entirely by LASP, as well as other instruments that LASP aided in building. Ten months and 442 million miles later, MAVEN successfully completed a Martian orbital insertion maneuver and is now in a stable orbit around the red planet.

From Michael Cox, Laboratory of Atmospheric and Space Physics, University of Colorado at Boulder

These are four of the first images taken by the IUVS instrument. IUVS obtained these false-color images about eight hours after the successful completion of MAVEN’s Mars orbital insertion maneuver on September 21, 2014.

LASP is the home of the principal investigator on the MAVEN project, Bruce Jakosky; LASP is also serving as the science operations center and the science data center, as well as providing education and public outreach for the MAVEN mission. As such, LASP is highly interested in the management of the datasets that MAVEN will produce. LASP has already had a VIVO instance in place for some time that deals with metadata related to solar irradiance data collected by other Earth-orbiting missions, but we hope to expand the ontology in the near future to be able to handle the atmospheric datasets that MAVEN will be creating.

As our VIVO instance stands today, LASP does store a fair amount of hardware infrastructure data: databases, filesystems, servers, directories and directory sizes, etc. Some of these infrastructure resources are directly related to MAVEN science data operations, providing information about where MAVEN databases live, how much space MAVEN directories are using, etc. VIVO currently serves as the authoritative source for a number of cost-center-type reports that aggregate and present this information to management.

However, our current dataset ontology (which is under heavy and active development) does not yet provide the capability to store MAVEN-type dataset metadata. We intend to work at developing a generic ‘Space-Based Ontology’ that could address the needs of any type of space-based scientific data. When development begins on this extended project, LASP certainly hopes to collaborate with VIVO ontologists and leverage the mutual work that is already underway in these areas.

Thanks to Michael Cox, LASP, for major contributions to this blog.

Oct 22

Thomson Reuters hosted a CONVERIS Global User Group Meeting for current and prospective users in Hatton Garden, London, on October 1-2, 2014. About 40 attendees from the UK, Sweden, the Netherlands, other European countries, and the University of Botswana met to discuss issues pertaining to Research Information Management Systems, the CONVERIS Roadmap, research analytics, and new features and functions being provided by CONVERIS ( VIVO Project Director Layne Johnson was on hand to highlight VIVO’s role in managing research information.

CONVERIS is a complete and integrated workflow solution that integrates data and analytics to track the research process from start to finish, including pre- and post-award management, publications management, graduate student management, a research portal to share information over the web, analytics, and more. Case studies showcased how CONVERIS is used to help drive research decision making, and community initiatives were discussed, including ORCID and the use of unique identifiers to drive better research management, and CASRAI for supporting interoperability across the research lifecycle. “What’s Next for Research Information Management?” was highlighted in presentations from euroCRIS and VIVO.

Layne Johnson, Ph.D., VIVO Project Director presented a talk entitled “The Evolving Role of VIVO in Research and Scholarly Networks” which provided a brief description of VIVO, examples of VIVO implementations, future initiatives of the VIVO Project, key VIVO partnerships essential for effective research information management, and the role of DuraSpace and VIVO in building and promoting scholarly and research communities. VIVO was also represented at the Global User Group Meeting, along with ORCID, CASRAI, and euroCRIS during a panel discussion and question and answer period where attendees and panelists discussed the importance of consistent data standards and interoperability and the need for consistent data models across the research spectrum.

Breakout sessions were held on October 2nd and covered:

  • Standards Integration
  • Integration with InCites
  • Open Access; and
  • Costing and Pricing.

VIVO and CONVERIS continue to explore opportunities for partnership in an effort to enhance sharing information about researchers, research output (including publications and datasets) and data interoperability. VIVO and CONVERIS offer complementary capabilities to the research community, and ongoing discussions are being held to identify ways that the two platforms can be used synergistically to enhance access and sharing of research-related information.

Thomson Reuters is an Investor Level Corporate Sponsor of the VIVO project and participates in the governance of VIVO by serving on the VIVO Leadership Group to provide strategic and tactical input, assist in setting priorities for the VIVO Project, and identify ways to promote the use of VIVO in the research community.

Oct 19

2014 Hackathon

Winchester, MA

The Fall 2014 VIVO Hackathon took place at Cornell University’s Mann Library in Ithaca, New York from Oct. 13-15. More than 30 enthusiastic developers from around the US and Canada were in attendance. Fall colors, gorge hikes, trips to local eateries, and opportunities for making friends around shared interests in leveraging the VIVO interdisciplinary semantic web framework all contributed to a lively and successful event.

On Wednesday morning project teams participated in a closing session where accomplishments were shared with the group. “Show and tell” reports included slides and pointers to code contributions, issues being tracked, and new documentation. Slides and notes are available here at, including pointers to GitHub from projects producing code.
Here are a few highlights.

A data visualization hack team set out to identify and gather external authoritative data to help VIVO institutions understand their sources of grant funding. The aim was to populate a “bubble map” visualization that would highlight the location and level of support from organizations providing grant funding to universities using VIVO. It was no surprise that in the initial visualization of the map of the US many large bubbles appeared over Washington, DC, leading the team to look for more specific address information than can be found in existing data sources such as FundRef. Using federal tax id numbers, the team linked additional zip+4 and classification information to the FundRef data so that a registry accessible as linked data will make it easier for VIVO institutions to "follow the (grant) money".

The ontology team tackled the challenges of sharing local ontology extensions to benefit other institutions and evolve the VIVO ontology in concert. The team identified key requirements for a central ontology registry where VIVO installations could review available extensions and/or submit new proposed classes, properties, or terms for review and possible adoption in subsequent VIVO-ISF ontology releases. The same platform could potentially also manage a registry of URIs for people, educational and funding organizations, journals, events, and vocabulary terms as persistent identifiers. By associating ORCID and/or VIAF records with registry entries wherever available, VIVO URIs from the registry could be linked to and/or added to Library of Congress and/or OCLC authority records used in library catalogs all over the world.

The principles of open data and linked data are central to the VIVO framework. Where and how to find the right authoritative data was a theme running through several hacks. Where do data come from? Where are data first generated? One team was concerned with how to include links to other, related things when publishing data on the web and proposed a simple version of a “VIVO requester” service that could use the registry proposed above to determine who already has a persistent URI for a funding organization or the right vocabulary of equipment or research resource types for a particular purpose.

Anyone who has worked in a wiki knows that it can become unmanageable over time with many people contributing entries without a clear structure or editorial review. Making the VIVO wiki less wacky was the subject of a hack that sought to address a commonly perceived lack of documentation by improving the structure and tagging existing content and making it more clear to potential contributors where new content belongs. The DSpace wiki space demonstrates one approach to wiki structure that separates technical documentation varying with each release from introductory and explanatory documentation. Highlighting fewer topical categories, renaming files for consistency and moving technical documents into space-specific versions were among improvements made to the VIVO wiki.

Faceted browsing to support dynamic filters on browse pages or search results requires adding facets to existing page templates. The faceted browse hack added a faceted search of people by research area, and a start was made on building a more general capability that can be configured on a per-site basis without having to modify code.
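As a rough illustration of what a facet computation involves (a sketch, not the actual VIVO search-index implementation), counting people per research area and filtering on a selected facet value might look like this; the sample records and field names are invented:

```python
from collections import Counter

# Invented sample records; in VIVO these come from the search index.
PEOPLE = [
    {"name": "A. Wilson", "research_areas": ["solar irradiance", "metadata"]},
    {"name": "P. West", "research_areas": ["semantic web"]},
    {"name": "H. Wang", "research_areas": ["semantic web", "metadata"]},
]

def facet_counts(records, field):
    """Count how many records carry each value of a multi-valued field;
    these counts are what a faceted browse page displays next to each filter."""
    return Counter(v for r in records for v in r[field])

def apply_facet(records, field, value):
    """Narrow the result set to records matching a selected facet value."""
    return [r for r in records if value in r[field]]
```

Making this configurable per site, as the hack aimed to do, amounts to choosing which fields to pass to `facet_counts` without changing code.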

Embedding tags for SEO (search engine optimization) in VIVO requires only modest changes to page templates and will be an out-of-the-box capability in the upcoming release of VIVO v1.8. Google now builds its search results based on recognizing content and structure in tags embedded in the HTML on web pages. This hack adds these tags to VIVO’s display pages and will make VIVO data more visible in Google and other search engines while also providing additional descriptive information in search results to benefit users.
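The post doesn't show the exact markup, but structured tags of the kind search engines recognize can be sketched as a schema.org Person description in JSON-LD. This particular format choice and the field values are illustrative assumptions, not the actual VIVO v1.8 template change:

```python
import json

def person_jsonld(name, job_title, affiliation, profile_url):
    """Build a schema.org Person description wrapped in the JSON-LD
    <script> tag that search engines parse for structured data."""
    doc = {
        "@context": "http://schema.org",
        "@type": "Person",
        "name": name,
        "jobTitle": job_title,
        "affiliation": {"@type": "Organization", "name": affiliation},
        "url": profile_url,
    }
    return ('<script type="application/ld+json">'
            + json.dumps(doc, indent=2)
            + "</script>")
```

Dropping one such block into a profile page template is enough for crawlers to associate the person, title, and organization shown on the page.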

Max Hu from Memorial University in Newfoundland demonstrated a Drupal CMS + VIVO data site: YAFFLE (, where data developed and stored in VIVO is presented to users through Drupal pages providing additional entry points and help support.

Josh Hanna from the University of Florida worked on testing VIVO with the Stardog enterprise graph database ( to see whether a commercial triple store could provide better performance with increasing amounts of data. Leveraging the VIVO community to negotiate academic Stardog licenses was discussed. Weill Cornell is also testing VIVO on other triple store platforms.

Patrick West reviewed the Deep Carbon Observatory Data Portal for the Deep Carbon Observatory (, a project led by RPI and funded by the Sloan Foundation. A Drupal front end with VIVO integration provides a variety of visualizations; persistent URI handles for every dataset link users back to VIVO as the primary metadata store. This functionality required a code modification in VIVO core to return the persistent data handles from the data repository at the time of dataset upload.

Northwestern University has offered to host the next VIVO Hackathon in 2015. More information to come.

Oct 1

August 12 - 14, 2015 in Cambridge, MA

Hyatt Regency Cambridge

The VIVO conference provides a unique opportunity for people from across the country and around the world to come together in the spirit of promoting scholarly collaboration and research discovery. This fun and exciting city will be the perfect backdrop for the 2015 conference. Join us to gain insight into the latest industry trends and innovations while enjoying all of the history, food, and culture Cambridge has to offer!

Be on the lookout for the Call for Papers coming soon!

If you have any questions, please do not hesitate to contact us.

Designing Events