Publication:
The storage system for a multimedia data manager kernel

dc.contributor.authorValêncio, Carlos Roberto [UNESP]
dc.contributor.authorDe Almeida, Fábio Renato [UNESP]
dc.contributor.authorMachado, José Márcio [UNESP]
dc.contributor.authorColombini, Angelo Cesar
dc.contributor.authorNeves, Leandro Alves [UNESP]
dc.contributor.authorDe Souza, Rogéria Cristiane Gratão [UNESP]
dc.contributor.institutionUniversidade Estadual Paulista (Unesp)
dc.contributor.institutionUniversidade Federal de São Carlos (UFSCar)
dc.date.accessioned2018-12-11T17:24:38Z
dc.date.available2018-12-11T17:24:38Z
dc.date.issued2014-01-01
dc.description.abstractOne way to boost the performance of a Database Management System (DBMS) is to fetch data in advance of their use, a technique known as prefetching. However, depending on the resource being used (file, disk partition, memory, etc.), prefetching may be done differently or may not be necessary at all, forcing a DBMS to be aware of the underlying Storage System. In this paper we propose a Storage System that frees the DBMS from this task by exposing the database through a single interface, no matter what kind of resource hosts it. We have implemented a file resource that recognizes and exploits sequential access patterns that emerge over time in order to prefetch blocks adjacent to the requested ones. Our approach is speculative because it considers past accesses, but it also considers hints from the upper layers of the DBMS, which must specify the access context in which a read operation takes place. The informed access context is then mapped to one of the available channels in the file resource, which is equipped with a set of internal buffers, one per channel, for the management of fetched and prefetched data. Prefetched data are moved to the main cache of the DBMS only if actually requested by the application, which helps to avoid cache pollution. In this way, a two-level cache hierarchy is introduced transparently, without any intervention by the DBMS kernel. We ran tests with different buffer settings and compared the results against the OBL (one-block lookahead) policy; the results show that read times can be up to two times faster in a highly concurrent environment, without sacrificing performance when the system is not under an intensive workload.en
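The abstract describes a context-aware, speculative prefetching scheme: read requests carry an access context, each context is mapped to a channel with its own buffer, and a detected sequential pattern triggers prefetching of adjacent blocks. The sketch below illustrates that idea only; all names (`Channel`, `FileResource`, `read`, `prefetch_depth`) are hypothetical and do not reflect the authors' actual implementation.

```python
# Illustrative sketch of context-aware sequential prefetching, as described
# in the abstract. Class and method names are assumptions, not the paper's API.

class Channel:
    """Per-context buffer holding fetched and prefetched blocks."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.buffer = {}        # block_id -> data (insertion-ordered)
        self.last_block = None  # for sequential-pattern detection

    def remember(self, block_id, data):
        # Simple FIFO eviction when the per-channel buffer is full.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(next(iter(self.buffer)))
        self.buffer[block_id] = data


class FileResource:
    """Single read interface over a backing file; maps access contexts to channels."""
    def __init__(self, blocks, prefetch_depth=2):
        self.blocks = blocks              # stand-in for blocks on disk
        self.prefetch_depth = prefetch_depth
        self.channels = {}                # access context -> Channel

    def read(self, block_id, context):
        ch = self.channels.setdefault(context, Channel())
        # Serve from the channel buffer if a prior prefetch brought the block
        # in; only then would it be promoted to the DBMS cache, which is what
        # keeps speculation from polluting that cache.
        if block_id in ch.buffer:
            data = ch.buffer.pop(block_id)
        else:
            data = self.blocks[block_id]
        # Speculative part: a sequential pattern (previous block + 1)
        # triggers prefetching of adjacent blocks into this channel only.
        if ch.last_block is not None and block_id == ch.last_block + 1:
            for nxt in range(block_id + 1, block_id + 1 + self.prefetch_depth):
                if nxt < len(self.blocks) and nxt not in ch.buffer:
                    ch.remember(nxt, self.blocks[nxt])
        ch.last_block = block_id
        return data
```

Two sequential reads in the same context warm that channel's buffer, while reads in another context leave it untouched; that per-channel isolation is the point of mapping contexts to separate buffers.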
dc.description.affiliationDepartamento de Ciências de Computação e Estatística, São Paulo State University - UNESP
dc.description.affiliationDepartamento de Computação, Federal University of São Carlos - UFSCar
dc.description.affiliationUnespDepartamento de Ciências de Computação e Estatística, São Paulo State University - UNESP
dc.format.extent219-225
dc.identifierhttp://dx.doi.org/10.1109/PDCAT.2013.41
dc.identifier.citationParallel and Distributed Computing, Applications and Technologies, PDCAT Proceedings, p. 219-225.
dc.identifier.doi10.1109/PDCAT.2013.41
dc.identifier.lattes4644812253875832
dc.identifier.lattes2139053814879312
dc.identifier.orcid0000-0002-9325-3159
dc.identifier.scopus2-s2.0-84907986006
dc.identifier.urihttp://hdl.handle.net/11449/177250
dc.language.isoeng
dc.relation.ispartofParallel and Distributed Computing, Applications and Technologies, PDCAT Proceedings
dc.rights.accessRightsAcesso aberto
dc.sourceScopus
dc.subjectAccess context
dc.subjectbuffer management
dc.subjectcache pollution
dc.subjectchannel
dc.subjectdatabase management system
dc.subjectmultimedia database
dc.subjectobject database
dc.subjectperformance
dc.subjectprefetching
dc.subjectsequential access pattern
dc.subjectstorage system
dc.titleThe storage system for a multimedia data manager kernelen
dc.typeTrabalho apresentado em evento
dspace.entity.typePublication
unesp.author.lattes2139053814879312
unesp.author.orcid4644812253875832[1]
unesp.author.orcid0000-0002-9325-3159[1]
unesp.campusUniversidade Estadual Paulista (Unesp), Instituto de Biociências, Letras e Ciências Exatas, São José do Rio Pretopt
unesp.departmentCiências da Computação e Estatística - IBILCEpt