E-Librarian Service: User-Friendly Semantic Search in Digital Libraries


Wilson and m. Abstract: Digital libraries are concerned with improving the access to collections to make their service more effective and valuable to users. In this paper, we present the results of a four-week longitudinal study investigating the use of both exploratory and keyword forms of search within an online video archive, where both forms of search were available concurrently in a single user interface.

While we expected early use to be more exploratory and subsequent use to be more directed, there was a balance of exploratory and keyword searches over the whole period, and the two were often used together. Further, supporting the notion that facets aid exploration, there were more than five times as many facet clicks as uses of the more complex forms of keyword search (boolean and advanced). From these results, we conclude that there is real value in investing in exploratory search support, which proved both popular and useful over extended use of the system.

Abstract: The growing availability of online K curriculum is increasing the need for meaningful alignment of this curriculum with state-specific standards. Promising automated and semi-automated alignment tools have recently become available. Unfortunately, recent alignment evaluation studies report low inter-rater reliability, e.

While these results are in line with studies in other domains, low reliability makes it difficult to accurately train automatic systems and complicates comparison of different services. Hence, we suggest decomposing these concepts into less abstract, more precise measures anchored in the daily practice of teaching. From NSDL 1.

  • E-librarian service: User-friendly semantic search in digital libraries
  • Living and Working in Space: The NASA History of Skylab
  • Neuropsychology for Psychologists, Health Care Professionals, and Attorneys, Third Edition
  • E-Librarian Service

As a mature program, NSDL has reached a point where it could either change direction or wind down. In this paper we argue there are reasons to continue the program and we outline several possible new program directions. Craig Carter and Colin Ashe. Abstract: This paper discusses a digital library designed to help undergraduate students draw connections across disciplines, beginning with introductory discipline-specific science courses including chemistry, materials science, and biophysics.

The collection serves as the basis for a design experiment for interdisciplinary educational libraries and is discussed in terms of the three models proposed by Sumner and Marlino. As a cognitive tool, the library is organized around recurring patterns in molecular science, with one such pattern being developed for this initial design experiment.

As a component repository, the library resources support learning of these patterns and how they appear in different disciplines. As a knowledge network, the library integrates design with use and assessment. Abstract: This paper describes the design and implementation of a curriculum overlay model for the representation of adaptable curriculum using educational digital library resources. We focus on representing curriculum to enable the incorporation of digital resources into curriculum and curriculum sharing and customization by educators.


We defined this model as a result of longitudinal studies on educators' development and customization of curriculum and user interface design studies of prototypes representing curriculum. Like overlay journals or the information network overlay model, our curriculum overlay model defines curriculum as a compound object with internal semantic relationships and relationships to digital library metadata describing resources.


We validated this model by instantiating the model using science curriculum which uses digital library resources and using this instantiation within an application which, built on FEDORA, supports curriculum customization. Findings from this work can support the design of digital library services for customizing curriculum which embeds digital resources.
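To make the idea of curriculum as a compound object more concrete, here is a minimal Python sketch; the class names, fields, and the `customize` operation are illustrative assumptions, not the paper's actual overlay schema or FEDORA's API.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One curriculum unit, linked to digital library resources by id."""
    title: str
    resource_ids: list = field(default_factory=list)  # links into DL metadata

@dataclass
class Curriculum:
    """A compound object: activities plus internal semantic relationships."""
    title: str
    activities: list = field(default_factory=list)
    relations: list = field(default_factory=list)  # e.g. (0, "precedes", 1)

    def customize(self, index, new_resource_ids):
        """Swap the digital resources of one activity without mutating the
        original: a minimal form of educator customization."""
        clone = Curriculum(
            self.title,
            [Activity(a.title, list(a.resource_ids)) for a in self.activities],
            list(self.relations),
        )
        clone.activities[index].resource_ids = list(new_resource_ids)
        return clone
```

The separation between the curriculum structure and the resource metadata it points to is what makes the object an "overlay": the same structure can be re-pointed at different resources.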

Abstract: Geolocalized databases are becoming necessary in a wide variety of application domains. Thus far, the creation of such databases has been a costly, manual process. This drawback has stimulated interest in automating their construction, for example, by mining geographical information from the Web. Here we present and evaluate a new automated technique for creating and enriching a geographical gazetteer, called Gazetiki.

Our technique merges disparate information from Wikipedia, Panoramio, and web search engines in order to identify geographical names, categorize these names, find their geographical coordinates and rank them. We show that our method provides a richer structure and an improved coverage compared to the other known attempt at automatically building a geographic database, TagMaps. The information produced in Gazetiki enhances and complements the Geonames database, using a similar domain model. Abstract: In this paper, we consider the problem of discovering GIS data sources on the web.

Source discovery queries for GIS data are specified using keywords and a region of interest. A source is considered relevant if it contains data that matches the keywords in the specified region. Existing techniques simply rely on textual metadata accompanying such datasets to compute relevance to user-queries. Such approaches result in poor search results, often missing the most relevant sources on the web. We address this problem by developing more meaningful summaries of GIS datasets that preserve the spatial distribution of keywords.
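One simple way to preserve the spatial distribution of keywords is a coarse grid-based summary. The sketch below is an illustrative stand-in, not the paper's actual summarization technique; the cell size and overlap-count scoring are arbitrary assumptions.

```python
from collections import defaultdict

def grid_cell(x, y, cell=10.0):
    """Map a coordinate to a coarse grid cell (cell size is arbitrary here)."""
    return (int(x // cell), int(y // cell))

def summarize(records, cell=10.0):
    """Build a keyword -> set-of-cells summary of a GIS dataset.

    records: iterable of (keyword, x, y) tuples.
    """
    summary = defaultdict(set)
    for kw, x, y in records:
        summary[kw].add(grid_cell(x, y, cell))
    return summary

def relevance(summary, keyword, region_cells):
    """Score a source by how many query-region cells contain the keyword."""
    return len(summary.get(keyword, set()) & region_cells)
```

Unlike a flat textual index, the summary can distinguish a source whose "river" data actually falls inside the query region from one that merely mentions rivers somewhere.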

We conduct experiments showing the effectiveness of proposed summarization techniques by significantly improving the quality of query results over previous approaches, while guaranteeing scalability and high performance. Abstract: Web 2. In this paper, we identify a number of vulnerabilities inherent in online communities and study opportunities for malicious participants to exploit the tight social fabric of these networks. With these problems in mind, we propose the SocialTrust framework for tamper-resilient trust establishment in online communities.

Two of the salient features of SocialTrust are its dynamic revision of trust by (i) distinguishing relationship quality from trust and (ii) incorporating a personalized feedback mechanism for adapting as the community evolves. We experimentally evaluate the SocialTrust framework using real online social networking data consisting of millions of MySpace profiles and relationships. We find that SocialTrust supports robust trust establishment even in the presence of large-scale collusion by malicious participants.

Abstract: Digital objects require appropriate measures for digital preservation to ensure that they can be accessed and used in the near and far future.

While heritage institutions have been addressing the challenges posed by digital preservation needs for some time, private users and SMEs are far less prepared to handle them. Yet both have increasing amounts of data that represent considerable value, be it office documents or family photographs. Backup, a common practice among home users, avoids the physical loss of data, but it does not prevent the loss of the ability to render and use the data in the long term.

Research and development in the area of digital preservation is driven by memory institutions and large businesses. The available tools, services and models are developed to meet the demands of these professional settings. Abstract: Our previous research has shown that the collective behavior of search engine caches e. Interacting with these caches and archives, which we call the Web Infrastructure (WI), allows entire websites to be reconstructed in an approach we call lazy preservation. Unfortunately, the WI only captures the client-side view of a web resource.

While this may be useful for recovering much of the content of a website, it is not helpful for restoring the scripts, web server configuration, databases, and other server-side components responsible for the construction of the web resource.

In this paper we describe an archive architecture that provides a minimal approach to the long-term preservation of digital objects based on co-archiving of object semantics, uniform representation of objects and semantics, explicit storage of all objects and semantics as files, and abstraction of the underlying storage system.

This architecture ensures that digital objects can be easily migrated from archive to archive over time and that the objects can, in principle, be made usable again at any point in the future; its primary benefit is that it serves as a fallback strategy against, and as a foundation for, more sophisticated and costly preservation strategies. We describe an implementation of this architecture in a prototype archive running at UCSB that also incorporates a suite of ingest and access components. Abstract: Collaborative, social tagging and annotation systems have exploded on the Internet as part of the Web 2.

Systems such as Flickr, Del. Although social tagging sites provide simple, user-relevant tags, there are issues associated with the quality of the metadata and the scalability compared with conventional indexing systems.


In this paper we propose a hybrid approach that enables authoritative metadata generated by traditional cataloguing methods to be merged with community annotations and tags. The harvested annotations are aggregated with the authoritative metadata in a centralized metadata store. This streamlined, interoperable, scalable approach enables libraries, archives and repositories to leverage community enthusiasm for tagging and annotation, augment their metadata and enhance their discovery services.
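A minimal sketch of aggregating community tags with authoritative catalogue records in a central store, assuming simple dictionary-shaped records; the field names, provenance layout, and de-duplication policy below are illustrative assumptions, not the HarvANA schema.

```python
def merge_records(authoritative, community):
    """Merge authoritative catalogue metadata with community tags,
    keeping provenance so the two sources can still be told apart.

    authoritative: {resource_id: {field: value}}
    community:     {resource_id: [tag, ...]}
    """
    merged = {}
    for rid, fields in authoritative.items():
        merged[rid] = {"catalogue": dict(fields), "tags": []}
    for rid, tags in community.items():
        merged.setdefault(rid, {"catalogue": {}, "tags": []})
        # de-duplicate while preserving the order tags were harvested in
        seen = set(merged[rid]["tags"])
        for t in tags:
            if t not in seen:
                merged[rid]["tags"].append(t)
                seen.add(t)
    return merged
```

Keeping the two sources in separate sub-records is one way to let a discovery service weight authoritative metadata and community annotations differently at query time.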

This paper describes the HarvANA system and its evaluation through a collaborative testbed with the National Library of Australia using architectural images from PictureAustralia. Abstract: In this paper we present a system called paperBase that aids users in entering metadata for preprints.


PaperBase extracts metadata from the preprint. PaperBase also predicts likely keywords for the preprints, based on a controlled vocabulary of keywords that the archive uses and a Bayesian classifier. Wang and C. Lee Giles. Abstract: With advances in automatic document processing and the growing popularity of digital libraries, large-scale digitization projects have been conducted at digital libraries.


Scientific literature originally printed on paper has been converted into collections of digital resources for preservation and open-access purposes. In this work, we tackle the problem of extracting structural and descriptive metadata for scanned volumes of journals. This metadata illustrates the internal structure of a scanned volume, links objects in different sources, and describes the published articles within a scanned volume. Such structural and descriptive information is critical for digital libraries to provide effective content-access functionality to users. We propose methods for generating volume-level, issue-level, and article-level metadata using format and text features extracted from OCRed text.
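As a toy illustration of deriving article-level metadata from format features of OCRed text, the sketch below parses dot-leader lines of a table of contents; the regular expression and the assumed entry layout are hypothetical examples, not the proposed methods.

```python
import re

# Hypothetical heuristic: in an OCRed table of contents, an article entry
# often looks like "Title .... 123" (title, dot leaders, start page).
ENTRY = re.compile(r"^(?P<title>.+?)\s*\.{2,}\s*(?P<page>\d+)\s*$")

def parse_contents(ocr_lines):
    """Extract (title, start_page) article-level metadata from OCRed
    table-of-contents lines; non-matching lines are skipped."""
    articles = []
    for line in ocr_lines:
        m = ENTRY.match(line.strip())
        if m:
            articles.append((m.group("title"), int(m.group("page"))))
    return articles
```

Real OCR output is noisier than this, so a production extractor would combine several such format cues (fonts, indentation, page position) rather than a single pattern.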

We have developed the system and integrated it into an operational digital library for real world usage. Schilit and Okan Kolak. Abstract: Key Ideas is a technique for exploring digital libraries by navigating passages that repeat across multiple books. From these popular passages emerge quotations that authors have copied from book to book because they capture an idea particularly well: Jefferson on liberty; Stanton on women's rights; and Gibson on cyberpunk.

  • User-Friendly Semantic Search in Digital Libraries
  • E-Librarian Service - User-Friendly Semantic Search in Digital Libraries
  • SWIB18 - Semantic Web in Libraries | Speakers
  • E-Librarian Service
  • Federal Contracting Made Easy, 3rd Edition
  • The Critical Path and Other Writings on Critical Theory, 1963-1975 (Collected Works of Northrop Frye)

We augment Popular Passages by extracting key terms from the surrounding context and computing sets of related key terms. We then create an interaction model where readers fluidly explore the library by viewing popular quotations on a particular key term, and follow links to quotations on related key terms. In this paper we describe our vision and motivation for Key Ideas, present an implementation running over a massive, real-world digital library consisting of over a million scanned books, and describe some of the technical and design challenges. The principal contribution of this paper is the interaction model and prototype system for browsing digital libraries of books using key terms extracted from the aggregate context of popularly quoted passages.

Abstract: We report on the user requirements study and preliminary implementation phases in creating a digital library that indexes and retrieves educational materials on math. We first review the current approaches and resources for math retrieval, then report on interviews of a small group of potential users to properly ascertain their needs. While preliminary, the results suggest that Meta-Search and Resource Categorization are two basic requirements for a math search engine. In addition, we implement a prototype categorization system and show that the generic features work well in identifying math content in webpages but are weak in categorizing it.

We believe this is mainly due to the training data and the segmentation. In the near future, we plan to improve it further while integrating it, together with Meta-Search, into a search engine. As a long-term goal, we will also look into how math expressions and text may best be handled. Abstract: Most information workers query digital libraries many times a day. Yet people have little opportunity to hone their skills in a controlled environment, or to compare their performance with others in an objective way. This paper describes an environment for exploratory query expansion that pits users against each other and lets them compete, and practice, in their own time and on their own workstations.

The system captures query evolution behavior on predetermined information-seeking tasks. It is publicly available, and the code is open source so that others can set up their own competitive environments. Abstract: At present very little is known about how people locate and view videos.

This study draws a rich picture of everyday video seeking strategies and video information needs, based on an ethnographic study of New Zealand university students. Abstract: Digital curators are faced with decisions about what part of the ever-growing, ever-evolving space of digital information to collect and preserve.

The recent explosion of web video on sites such as YouTube presents curators with an even greater challenge: how to sort through and filter a large amount of information to find, assess and ultimately preserve important, relevant, and interesting video. In this paper, we describe research conducted to help inform digital curation of on-line video. Since May , we have been monitoring the results of 57 queries on YouTube related to the U.

Most open-access scholarly digital libraries periodically crawl a list of seed URLs in order to obtain appropriate collections of freely-available research papers.

The metadata of the crawled papers, e. The venue of publication is another important aspect about a scientific paper, which reflects its authoritativeness. However, the venue is not always readily available for a paper. Instead, it needs to be extracted from the references lists of other papers that cite the target paper, resulting in a difficult process.

In this paper, we explore a supervised learning approach to classifying the venue of a research paper by leveraging information solely available from the content of the paper. We show experimentally on a dataset of approximately 44, papers that this approach outperforms several baselines on venue classification.

With an increasing amount of information on globally important events, there is a growing demand for efficient analytics of multilingual event-centric information.

Such analytics is particularly challenging due to the large amount of content, the event dynamics and the language barrier. Although memory institutions increasingly collect event-centric Web content in different languages, very little is known about the strategies of researchers who conduct analytics of such content. We discuss the influence factors for these strategies, the findings enabled by the adopted methods along with the current limitations and provide recommendations for services supporting researchers in cross-lingual event-centric analytics.

Library digitalization projects almost always use a page-driven file format for the description of manuscript transcriptions. This article shows how the TEITOK corpus framework provides a two-stage approach, dealing first with transcription in a page-driven manner, and afterwards converting losslessly to a text-driven format, leading to a fully searchable corpus closely linked to the manuscript images.

This paper asks to what extent querying, clicking, and text editing behavior can predict the usefulness of the search results retrieved during essay writing. By demonstrating that rather simple models can predict retrieval success, our study constitutes a first step towards incorporating usefulness signals in retrieval personalization for general writing tasks, presuming our results generalize.

Search sessions consist of multiple user-system interactions. As a user-oriented measure of the difficulty of a session, we regard the time needed for finding the next relevant document (TTR).

In this study, we analyse the search log of an academic search engine, focusing on the user interaction data without regarding the actual content. After observing a user for a short time, we predict the TTR for the remainder of the session. In addition to standard machine learning methods for numeric prediction, we investigate a new approach based on an ensemble of Markov models. Both types of methods yield similar performance.
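A minimal sketch of the kind of first-order Markov model that can be estimated from interaction logs, with a simple global/user blend as one conceivable way to adapt parameters to the current user; the event names, smoothing scheme, and blending weight are assumptions for illustration, not the paper's ensemble method.

```python
from collections import defaultdict

def transition_probs(sessions, smoothing=1.0):
    """Estimate first-order Markov transition probabilities over
    interaction events (e.g. 'query', 'click', 'relevant').

    sessions: list of event sequences, one per search session.
    """
    counts = defaultdict(lambda: defaultdict(float))
    for events in sessions:
        for a, b in zip(events, events[1:]):
            counts[a][b] += 1
    probs = {}
    for a, outgoing in counts.items():
        total = sum(outgoing.values()) + smoothing * len(outgoing)
        probs[a] = {b: (c + smoothing) / total for b, c in outgoing.items()}
    return probs

def personalise(global_p, user_p, weight=0.3):
    """Blend global and per-user transition estimates: a simple stand-in
    for adapting model parameters to the current user."""
    blended = {}
    for a in set(global_p) | set(user_p):
        states = set(global_p.get(a, {})) | set(user_p.get(a, {}))
        blended[a] = {b: (1 - weight) * global_p.get(a, {}).get(b, 0.0)
                         + weight * user_p.get(a, {}).get(b, 0.0)
                      for b in states}
    return blended
```

Given such transition estimates, the expected number of steps until a "relevant" event can be derived, which is one route from interaction data to a TTR prediction.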

However, when we personalise the Markov models by adapting their parameters to the current user, this leads to significant improvements.

Digital cultural heritage (DCH) institutions are experiencing transitory visitation patterns to their online collections through traditional search interfaces. Generous interfaces have been lauded as a replacement for traditional search, yet their effects on user engagement are relatively unexplored. This paper presents the results of an online experiment with 3 prolific DCH generous interfaces, which aimed to quantify the effects of component use on user engagement.

The results highlight that, despite no significant difference in focused attention levels, novel generous interface components promote engagement factors. Participants who made more use of the components were found to be more likely to experience user engagement. Additionally, the generous interfaces were found to promote serendipitous discovery of collection items and to support casual museum users despite low familiarity with the interfaces. The success of the tested generous interfaces is contingent upon the representation of the collection items and how interesting they are to participants on initial view.

Peer review and citation data in predicting university rankings: a large-scale analysis. Most performance-based research funding systems (PRFS) draw on peer review and bibliometric indicators, two different methodologies which are sometimes combined. A common argument against the use of indicators in such research evaluation exercises is their low correlation at the article level with peer review judgments.

In this study, we analyse , papers from higher education institutes which were peer reviewed in a national research evaluation exercise. We combine these data with 6.


We show that when citation-based indicators are applied at the institutional or departmental level, rather than at the level of individual papers, surprisingly large correlations with peer review judgments can be observed, up to r.

New learning resources are created and minted in Massive Open Online Courses every week: new videos, quizzes, assessments and discussion threads are deployed and interacted with in the era of on-demand online learning.

However, these resources are often artificially siloed between platforms and web application models.

This paper summarizes the results of a comprehensive statistical analysis of a corpus of open-access articles and the figures they contain. It gives an insight into quantitative relationships between illustrations or types of illustrations, caption lengths, subjects, publishers, author affiliations, article citations and others.

The multimedia content in the World Wide Web is rapidly growing and contains valuable information for many applications in different domains.

For this reason, the Internet Archive initiative has been gathering billions of time-versioned web pages since the mid-nineties. However, the huge amount of data is rarely labeled with appropriate metadata and automatic approaches are required to enable semantic search.

Normally, the textual content of the Internet Archive is used to extract entities and their possible relations across domains such as politics and entertainment, whereas image and video content is usually neglected. In this paper, we introduce a system for person recognition in image content of the Internet Archive. Thus, the system complements entity recognition in text and allows researchers and analysts to track media coverage and relations of persons more precisely. Based on a deep learning face recognition approach, we suggest a system that automatically detects persons of interest and gathers sample material, which is subsequently used to identify them in the image data of the Internet Archive.
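The matching step of such a pipeline can be sketched as nearest-neighbour search over face embeddings; the vectors below stand in for the output of a deep face-recognition model, and the similarity threshold is an arbitrary assumption.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(face_embedding, gallery, threshold=0.8):
    """Match a face embedding against a gallery of labelled sample
    embeddings; return the best label, or None if nothing clears the
    threshold (i.e. the face is not a known person of interest)."""
    best_label, best_score = None, threshold
    for label, ref in gallery.items():
        score = cosine(face_embedding, ref)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

In practice the gallery would hold many sample embeddings per person, gathered automatically as the abstract describes, and matching would use an approximate-nearest-neighbour index rather than a linear scan.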

We evaluate the performance of the face recognition system on an appropriate standard benchmark dataset and demonstrate the feasibility of the approach with some use cases.

Extraction of information from a research article, association with other sources and inference of new knowledge is a challenging task that has not yet been entirely addressed. We present Research Spotlight, a system that leverages existing information from DBpedia, retrieves articles from repositories, extracts and interrelates various kinds of named and non-named entities by exploiting article metadata, the structure of text as well as syntactic, lexical and semantic constraints, and populates a knowledge base in the form of RDF triples.

An ontology designed to represent scholarly practices drives the whole process. The system is evaluated through two experiments that measure the overall accuracy in terms of token- and entity-based precision, recall and F1 scores, as well as entity boundary detection, with promising results.

In this paper, we promote the idea of automatic semantic characterization of scientific claims to explore entity-entity relationships in digital collections; our proposed approach aims at alleviating the time-consuming analysis of query results when the information need is not just one document but an overview over a set of documents.

We demonstrate the effectiveness of our method with respect to quality in a practical evaluation using a real-world document collection from the medical domain, showing the potential of our approach.

In this paper, we propose a system for automatic segmentation and semantic annotation of verbose queries with predefined metadata fields. The problem of generating an optimal segmentation has been modeled as a simulated annealing problem with a proposed solution cost function and neighborhood function.
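A simulated-annealing segmenter over query tokens can be sketched as follows; the lexicon-based cost function, the boundary-toggling neighborhood, and the cooling schedule are illustrative assumptions, not the system's actual functions.

```python
import math
import random

def segment_cost(segments, lexicon):
    """Toy cost: reward segments found in a phrase lexicon, penalise
    fragmentation (one unit of cost per segment)."""
    cost = len(segments)
    for seg in segments:
        if " ".join(seg) in lexicon:
            cost -= 2
    return cost

def neighbour(boundaries, n):
    """Neighborhood move: toggle one random internal boundary position."""
    b = set(boundaries)
    pos = random.randrange(1, n)
    b.symmetric_difference_update({pos})
    return sorted(b)

def to_segments(tokens, boundaries):
    """Turn a sorted boundary list into a partition of the token list."""
    cuts = [0] + list(boundaries) + [len(tokens)]
    return [tokens[i:j] for i, j in zip(cuts, cuts[1:])]

def anneal(tokens, lexicon, steps=2000, t0=2.0, seed=0):
    """Minimise segment_cost by simulated annealing over boundary sets."""
    random.seed(seed)
    bounds = list(range(1, len(tokens)))  # start fully fragmented
    cost = segment_cost(to_segments(tokens, bounds), lexicon)
    best, best_cost = bounds, cost
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9  # linear cooling
        cand = neighbour(bounds, len(tokens))
        c = segment_cost(to_segments(tokens, cand), lexicon)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if c < cost or random.random() < math.exp((cost - c) / temp):
            bounds, cost = cand, c
            if c < best_cost:
                best, best_cost = bounds, c
    return to_segments(tokens, best)
```

With a lexicon containing "semantic search" and "digital libraries", the query "semantic search in digital libraries" should settle on the three-segment partition that keeps both phrases intact.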

The annotation problem has been modeled as a sequence labeling problem and implemented with a Hidden Markov Model (HMM). Component-wise and holistic evaluations of the system have been performed using a gold-standard annotation developed over a query log collected from the National Digital Library of India (NDLI).

Our goal in this work is to provide open-source software recommendations using the GitHub API. To demonstrate our approach, we implement a proof-of-concept prototype that provides software recommendations.

The amount of available videos on the Web has significantly increased, and not only for the purpose of entertainment.

E-Librarian Service, by Serge Linckels.