
Browsing by Subject "CALIDAD DE DATOS"

Showing 1 - 7 of 7
  • Journal article
    Analyzing the quality of Twitter data streams
    (2022) Arolfo, Franco; Cortés Rodriguez, Kevin; Vaisman, Alejandro Ariel
    "There is a general belief that the quality of Twitter data streams is generally low and unpredictable, making, in some way, unreliable to take decisions based on such data. The work presented here addresses this problem from a Data Quality (DQ) perspective, adapting the traditional methods used in relational databases, based on quality dimensions and metrics, to capture the characteristics of Twitter data streams in particular, and of Big Data in a more general sense. Therefore, as a first contribution, this paper re-defines the classic DQ dimensions and metrics for the scenario under study. Second, the paper introduces a software tool that allows capturing Twitter data streams in real time, computing their DQ and displaying the results through a wide variety of graphics. As a third contribution of this paper, using the aforementioned machinery, a thorough analysis of the DQ of Twitter streams is performed, based on four dimensions: Readability, Completeness, Usefulness, and Trustworthiness. These dimensions are studied for several different cases, namely unfiltered data streams, data streams filtered using a collection of keywords, and classifying tweets referring to different topics, studying the DQ for each topic. Further, although it is well known that the number of geolocalized tweets is very low, the paper studies the DQ of tweets with respect to the place from where they are posted. Last but not least, the tool allows changing the weights of each quality dimension considered in the computation of the overall data quality of a tweet. This allows defining weights that fit different analysis contexts and/or different user profiles. Interestingly, this study reveals that the quality of Twitter streams is higher than what would have been expected."
  • Journal article
    Analyzing the quality of Twitter data streams
    (2020) Arolfo, Franco A.; Cortes Rodriguez, Kevin; Vaisman, Alejandro Ariel
    "There is a general belief that the quality of Twitter data streams is generally low and unpredictable, making, in some way, unreliable to take decisions based on such data. The work presented here addresses this problem from a Data Quality (DQ) perspective, adapting the traditional methods used in relational databases, based on quality dimensions and metrics, to capture the characteristics of Twitter data streams in particular, and of Big Data in a more general sense. Therefore, as a first contribution, this paper re-defines the classic DQ dimensions and metrics for the scenario under study. Second, the paper introduces a software tool that allows capturing Twitter data streams in real time, computing their DQ and displaying the results through a wide variety of graphics. As a third contribution of this paper, using the aforementioned machinery, a thorough analysis of the DQ of Twitter streams is performed, based on four dimensions: Readability, Completeness, Usefulness, and Trustworthiness. These dimensions are studied for several different cases, namely unfiltered data streams, data streams filtered using a collection of keywords, and classifying tweets referring to different topics, studying the DQ for each topic. Further, although it is well known that the number of geolocalized tweets is very low, the paper studies the DQ of tweets with respect to the place from where they are posted. Last but not least, the tool allows changing the weights of each quality dimension considered in the computation of the overall data quality of a tweet. This allows defining weights that fit different analysis contexts and/or different user profiles. Interestingly, this study reveals that the quality of Twitter streams is higher than what would have been expected."
  • Undergraduate final project
    Calidad de datos contextual en Big Data: calidad de datos de Twitter
    (2020-04-24) Cortés Rodríguez, Kevin Imanol; Vaisman, Alejandro Ariel
    "En cada una de las fases del análisis en los procesos relacionados a Big Data, la calidad de datos juega un papel importante. La obtención de la calidad de datos, basados en las dimensiones de la calidad y métricas, deben ser adaptados en pos de capturar las nuevas características que el Big Data nos afronta. Este documento trata de profundizar dicho problema, redefiniendo las dimensiones y métricas de la calidad de datos en un escenario de Big Data, donde el dato llega en tiempo real en formato JSON y es procesado por distintos componentes para obtener métricas de calidad de datos. En particular, este proyecto estudia el caso concreto de mensajes de usuarios de la red social Twitter. Por otra parte, también se detalla la implementación de una nueva arquitectura continuando el proyecto de Data quality in a big data context: about Twitter’s data quality basada en microservicios, desde el momento que se procesa un tweet, llega desde la interfaz al usuario y todas las mejoras agregadas en pos de mejorar la experiencia al usuario."
  • Undergraduate final project
    Calidad de datos en Linked Data
    (2018) Alderete, Facundo; de la Puerta Echeverría, María; Romarión, Germán Rodrigo; Vaisman, Alejandro Ariel
  • Final specialization project
    Calidad de datos y aprendizaje automático: detección de errores semánticos en datos estructurados con esquema desconocido
    (2021-11) Lentini, Alejandro Daniel; Soliani, Valeria
    "El presente trabajo tiene como objetivo general evaluar si técnicas del aprendizaje automático provenientes del área del procesamiento natural del lenguaje pueden tener aplicación práctica en la detección semiautomática de errores semánticos en datos estructurados multivariados con calidad y esquema de datos desconocidos, ofreciendo lineamientos para el desarrollo de herramientas que asistan a los usuarios en estas tareas."
  • Conference paper
    Data quality in a big data context
    (2018) Arolfo, Franco A.; Vaisman, Alejandro Ariel
    "In each of the phases of a Big Data analysis process, data quality (DQ) plays a key role. Given the particular characteristics of the data at hand, the traditional DQ methods used for relational databases, based on quality dimensions and metrics, must be adapted and extended, in order to capture the new characteristics that Big Data introduces. This paper dives into this problem, re-defining the DQ dimensions and metrics for a Big Data scenario, where data may arrive, for example, as unstructured documents in real time. This general scenario is instantiated to study the concrete case of Twitter feeds. Further, the paper also describes the implementation of a system that acquires tweets in real time, and computes the quality of each tweet, applying the quality metrics that are defined formally in the paper. The implementation includes a web user interface that allows filtering the tweets for example by keywords, and visualizing the quality of a data stream in many different ways. Experiments are performed and their results discussed."
  • Undergraduate final project
    Data quality in a big data context: about Twitter’s data quality
    (2018) Arolfo, Franco A.; Vaisman, Alejandro Ariel
    "In each of the phases of a Big Data analysis process, Data Quality (DQ) plays a key role. Given the particular characteristics of the data at hand, the traditional DQ methods, based on quality dimensions and metrics, must be adapted and extended, in order to capture the new characteristics that Big Data introduces. This paper dives into this problem, re-defining the DQ dimensions and metrics for a Big Data scenario, where the data arrives, in this particular case, as unstructured documents in real time, such as JSON objects. This general scenario is instantiated to study the concrete case of Twitter feeds. Further, the paper also describes the implementation of a system that acquires tweets in real time, and computes the quality of each tweet, applying the quality metrics that are defined formally in the paper. The implementation includes a web user interface that allows filtering the tweets, for example, by keywords, and visualizing the quality of a data stream in many different ways. Experiments are performed and their results discussed."

License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
