Journal Articles
Browsing Journal Articles by Issue Date
Now showing 1 - 20 of 42
An inference engine based on fuzzy logic for uncertain and imprecise expert reasoning (2002-07). D'Aquila, Raimundo; Crespo Crespo, Cecilia; Mate, J. L.; Pazos, J.
"This paper addresses the development and computational implementation of an inference engine based on a full fuzzy logic, excluding only imprecise quantifiers, for handling uncertainty and imprecision in rule-based expert systems. The logical model exploits some connectives of Lukasiewicz’s infinite multi-valued logic and is mainly founded on the work of L.A. Zadeh and J.F. Baldwin. As it is oriented to expert systems, the inference engine was developed to be as knowledge domain independent as possible, while having satisfactory computational efficiency (...)."

Foundations and applications for secure triggers (2006-02). Futoransky, Ariel; Kargieman, Emiliano; Sarraute, Carlos; Waissbein, Ariel
"Imagine there is certain content we want to maintain private until some particular event occurs, when we want to have it automatically disclosed. Suppose, furthermore, that we want this done in a (possibly) malicious host. Say, the confidential content is a piece of code belonging to a computer program that should remain ciphered and then “be triggered” (i.e., deciphered and executed) when the underlying system satisfies a preselected condition which must remain secret after code inspection. In this work we present different solutions for problems of this sort, using different “declassification” criteria, based on a primitive we call secure triggers. We establish the notion of secure triggers in the universally-composable security framework of [Canetti 2001] and introduce several examples. Our examples demonstrate that a new sort of obfuscation is possible. Finally, we motivate its use with applications in realistic scenarios."
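The simplest member of this family of primitives discloses content only when the host is fed an input equal to a secret value. Below is a minimal Python sketch of that idea, under stated assumptions: the hash-based key derivation and the toy XOR keystream are illustrative choices, not the construction proved secure in the paper.

```python
import hashlib
import hmac
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR cipher keyed by counter-mode SHA-256 blocks (illustrative, not production crypto)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

class SimpleTrigger:
    """Holds ciphered content; discloses it only on an input equal to the secret."""

    def __init__(self, secret: bytes, content: bytes):
        self.salt = os.urandom(16)
        # The tag lets a (possibly malicious) host test candidate inputs
        # without the plaintext secret being stored anywhere.
        self.tag = hashlib.sha256(self.salt + secret).digest()
        key = hashlib.sha256(b"key" + self.salt + secret).digest()
        self.ciphertext = keystream_xor(key, content)

    def try_fire(self, candidate: bytes):
        if hmac.compare_digest(hashlib.sha256(self.salt + candidate).digest(), self.tag):
            key = hashlib.sha256(b"key" + self.salt + candidate).digest()
            return keystream_xor(key, self.ciphertext)
        return None  # condition not met: content stays ciphered

trigger = SimpleTrigger(b"secret event", b"confidential payload")
assert trigger.try_fire(b"wrong event") is None
assert trigger.try_fire(b"secret event") == b"confidential payload"
```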
Piet: a GIS-OLAP implementation (2007). Vaisman, Alejandro Ariel; Gómez, Leticia Irene; Kuijpers, Bart; Escribano, Ariel
"Data aggregation in Geographic Information Systems (GIS) is a desirable feature, although only marginally present in commercial systems, which also fail to provide integration between GIS and OLAP (On Line Analytical Processing). With this in mind, we have developed Piet, a system that makes use of a novel query processing technique: first, a process called sub-polygonization decomposes each thematic layer in a GIS into open convex polygons; then, another process computes and stores in a database the overlay of those layers for later use by a query processor. We describe the implementation of Piet, and provide experimental evidence that overlay precomputation can outperform GIS systems that employ indexing schemes based on R-trees."

A data model and query language for spatio-temporal decision support (2010). Gómez, Leticia Irene; Kuijpers, Bart; Vaisman, Alejandro Ariel
"In recent years, applications aimed at exploring and analyzing spatial data have emerged, powered by the increasing need of software that integrates Geographic Information Systems (GIS) and On-Line Analytical Processing (OLAP). These applications have been called SOLAP (Spatial OLAP). In previous work, the authors have introduced Piet, a system based on a formal data model that integrates in a single framework GIS, OLAP (On-Line Analytical Processing), and Moving Object data. Real-world problems are inherently spatio-temporal. Thus, in this paper we present a data model that extends Piet, allowing tracking the history of spatial data in the GIS layers. We present a formal study of the two typical ways of introducing time into Piet: timestamping the thematic layers in the GIS, and timestamping the spatial objects in each layer. We denote these strategies snapshot-based and timestamp-based representations, respectively, following well-known terminology borrowed from temporal databases. We present and discuss the formal model for both alternatives. Based on the timestamp-based representation, we introduce a formal First-Order spatio-temporal query language, which we denote Lt, able to express spatio-temporal queries over GIS, OLAP, and trajectory data. Finally, we discuss implementation issues, the update operators that must be supported by the model, and sketch a temporal extension to Piet-QL, the SQL-like query language that supports Piet."

A simple linearization of the self-shrinking generator by means of cellular automata (2010). Fúster-Sabater, Amparo; Pazo-Robles, María Eugenia; Caballero-Gil, Pino
"In this work, it is shown that the output sequence of a well-known cryptographic generator, the so-called self-shrinking generator, can be obtained from a simple linear model based on cellular automata. In fact, such a cellular model is a linear version of a nonlinear keystream generator currently used in stream ciphers. The linearization procedure is immediate and is based on the concatenation of a basic structure. The obtained cellular automata can be easily implemented with FPGA logic. Linearity and symmetry properties in such automata can be advantageously exploited for the analysis and/or cryptanalysis of this particular type of sequence generator."
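For reference, the self-shrinking generator being linearized is easy to state: clock an LFSR, read its output in pairs (a2i, a2i+1), and emit a2i+1 only when a2i = 1. A plain-Python sketch follows; the register length and taps are arbitrary illustrations, and the cellular-automata linearization itself is not reproduced here.

```python
from itertools import islice

def lfsr_bits(state, taps):
    """Fibonacci LFSR over GF(2): `state` is a list of bits, `taps` are the
    0-indexed positions XORed together to form the feedback bit."""
    state = list(state)
    while True:
        out = state[-1]
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]
        yield out

def self_shrinking(bits):
    """Self-shrinking rule: take the output bits in pairs (a, b);
    emit b when a == 1, discard the pair otherwise."""
    it = iter(bits)
    for a, b in zip(it, it):
        if a == 1:
            yield b

# Toy 8-bit register with arbitrary taps; a real keystream would use a much longer LFSR.
source = lfsr_bits([1, 0, 1, 1, 0, 0, 1, 0], taps=[0, 2, 3, 7])
print(list(islice(self_shrinking(source), 16)))
```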
An algebra for OLAP (2017). Kuijpers, Bart; Vaisman, Alejandro Ariel
"Online Analytical Processing (OLAP) comprises tools and algorithms that allow querying multidimensional databases. It is based on the multidimensional model, where data can be seen as a cube, where each cell contains one or more measures that can be aggregated along dimensions. Despite the extensive corpus of work in the field, a standard language for OLAP is still needed, since there is no well-defined, accepted semantics for many of the usual OLAP operations. In this paper, we address this problem, and present a set of operations for manipulating a data cube. We clearly define the semantics of these operations, and prove that they can be composed, yielding a language powerful enough to express complex OLAP queries. We express these operations as a sequence of atomic transformations over a fixed multidimensional matrix, whose cells contain a sequence of measures. Each atomic transformation produces a new measure. When a sequence of transformations defines an OLAP operation, a flag is produced indicating which cells must be considered as input for the next operation. In this way, an elegant algebra is defined. Our main contribution, with respect to other similar efforts in the field, is that, for the first time, a formal proof of the correctness of the operations is given, thus providing a clear semantics for them. We believe the present work will serve as a basis to build more solid practical tools for data analysis."

Characterization of electric load with information theory quantifiers (2017-01). Aquino, Andre L. L.; Ramos, Heitor S.; Frery, Alejandro C.; Viana, Leonardo P.; Cavalcante, Tamer S. G.; Rosso, Osvaldo A.
"This paper presents a study of the electric load behavior based on the Causality Complexity–Entropy Plane. We use a public data set, namely REDD, which contains detailed power usage information from several domestic appliances. In our characterization, we use the available power data of the circuit/devices of all houses. The Bandt–Pompe methodology combined with the Causality Complexity–Entropy Plane was used to identify and characterize regimes and behaviors over these data. The results showed that this characterization provides a useful insight into the underlying dynamics that govern the electric load."
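The Bandt–Pompe symbolization behind that plane is compact enough to sketch: each sliding window of the series is replaced by the permutation that sorts it, and the normalized Shannon entropy of the pattern histogram gives the entropy coordinate of the plane (the statistical-complexity coordinate is omitted here). The signal below is an invented load-like series, not REDD data.

```python
import math
import random
from collections import Counter

def permutation_entropy(series, d=4):
    """Bandt–Pompe: map each length-d window to its ordinal pattern, then
    return the Shannon entropy of the pattern histogram, normalized to [0, 1]."""
    patterns = Counter(
        tuple(sorted(range(d), key=lambda i: series[t + i]))
        for t in range(len(series) - d + 1)
    )
    total = sum(patterns.values())
    entropy = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return entropy / math.log(math.factorial(d))

# Invented daily-cycle "load" profile with noise; REDD traces would be used instead.
random.seed(0)
load = [math.sin(2 * math.pi * t / 24) + 0.1 * random.gauss(0, 1) for t in range(1000)]
print(round(permutation_entropy(load), 3))
```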
The geodesic distance between 𝒢I0 models and its application to region discrimination (2017-03). Naranjo-Torres, José; Gambini, Juliana; Frery, Alejandro C.
"The 𝒢I0 distribution is able to characterize different regions in monopolarized SAR imagery. It is indexed by three parameters: the number of looks (which can be estimated in the whole image), a scale parameter, and a texture parameter. This paper presents a new proposal for feature extraction and region discrimination in SAR imagery, using the geodesic distance as a measure of dissimilarity between 𝒢I0 models. We derive geodesic distances between models that describe several practical situations, assuming the number of looks known, for same and different texture and for same and different scale. We then apply this new tool to the problems of identifying edges between regions with different texture, and quantify the dissimilarity between pairs of samples in actual SAR data. We analyze the advantages of using the geodesic distance when compared to stochastic distances."

Efficient analytical queries on semantic web data cubes (2017-12). Etcheverry, Lorena; Vaisman, Alejandro Ariel
"The amount of multidimensional data published on the semantic web (SW) is constantly increasing, due to initiatives such as Open Data and Open Government Data, among other ones. Models, languages, and tools that allow obtaining valuable information efficiently are thus required. Multidimensional data are typically represented as data cubes, and exploited using Online Analytical Processing (OLAP) techniques. The RDF Data Cube Vocabulary, also denoted QB, is the current W3C standard to represent statistical data on the SW. Given that QB does not include key features needed for OLAP analysis, in previous work we have proposed an extension, denoted QB4OLAP, to overcome this problem without the need of modifying already published data. Once data cubes are appropriately represented on the SW, we need mechanisms to analyze them. However, in the current state-of-the-art, writing efficient analytical queries over SW data cubes demands a deep knowledge of standards like RDF and SPARQL. These skills are unlikely to be found in typical analytical users. Further, OLAP languages like MDX are far from being easily understood by the final user. The lack of friendly tools to exploit multidimensional data on the SW is a barrier that needs to be broken to promote the publication of such data. This is the problem we address in this paper. Our approach is based on allowing analytical users to write queries using what they know best: OLAP operations over data cubes, without dealing with SW technicalities. For this, we devised CQL (standing for Cube Query Language), a simple, high-level query language that operates over data cubes. Taking advantage of structural metadata provided by QB4OLAP, we translate CQL queries into SPARQL ones. Then, we propose query improvement strategies to produce efficient SPARQL queries, adapting general-purpose SPARQL query optimization techniques. We evaluate our implementation using the Star-Schema benchmark, showing that our proposal outperforms others. The QB4OLAP toolkit, a web application that allows exploring and querying (using CQL) SW data cubes, completes our contributions."
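The flavor of the translation can be shown on a hypothetical cube. Everything below is invented for illustration: the CQL-style roll-up, the example.org URIs, and the hierarchy property; the SPARQL is the kind of aggregate query such an operation could compile to, not the toolkit's actual output.

```python
# Hypothetical CQL-style operation: roll sales up to the country level, summing amounts.
cql = "ROLLUP(sales, customerDim, countryLevel) ; SUM(amount)"

# An aggregate SPARQL query of the kind the above could translate to.
sparql = """
PREFIX qb: <http://purl.org/linked-data/cube#>
PREFIX ex: <http://example.org/cube#>

SELECT ?country (SUM(?amount) AS ?total)
WHERE {
  ?obs a qb:Observation ;
       qb:dataSet  ex:salesDataset ;
       ex:amount   ?amount ;
       ex:customer ?customer .
  ?customer ex:inCountry ?country .   # climb one level in the customer hierarchy
}
GROUP BY ?country
"""
print(sparql)
```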
Pedestrian collective motion in competitive room evacuation (2017-12). Garcimartín, Ángel; Pastor, José Martín; Martín-Gómez, César; Parisi, Daniel; Zuriguel, Iker
"When a sizable number of people evacuate a room, if the door is not large enough, an accumulation of pedestrians in front of the exit may take place. This is the cause of emerging collective phenomena where the density is believed to be the key variable determining the pedestrian dynamics. Here, we show that when sustained contact among the individuals exists, density is not enough to describe the evacuation, and propose that at least another variable, such as the kinetic stress, is required. We recorded evacuation drills with different degrees of competitiveness where the individuals are allowed to moderately push each other on their way out. We obtain the density, velocity and kinetic stress fields over time, showing that competitiveness strongly affects them and evidencing patterns which have never been observed in previous (low pressure) evacuation experiments. For the highest competitiveness scenario, we detect the development of sudden collective motions. These movements are related to a notable increase of the kinetic stress and a reduction of the velocity towards the door, but do not depend on the density."

Improving lazy abstraction for SCR specifications through constraint relaxation (2018-03). Degiovanni, Renzo; Ponzio, Pablo; Aguirre, Nazareno; Frías, Marcelo
"Formal requirements specifications, e.g., software cost reduction (SCR) specifications, are challenging to analyse using automated techniques such as model checking. Since such specifications are meant to capture requirements, they tend to refer to real-world magnitudes often characterized through variables over large domains. At the same time, they feature a high degree of nondeterminism, as opposed to other analysis contexts such as (sequential) program verification. This makes model checking of SCR specifications difficult even for symbolic approaches. Moreover, automated abstraction refinement techniques such as counterexample guided abstraction refinement fail in many cases in this context, since the concrete state space is typically large, and reaching specific states of interest may require complex executions involving many different states, causing these approaches to perform many abstraction refinements, and making them ineffective in practice. In this paper, an approach to tackle the above situation, through a 2-stage abstraction, is presented. The specification is first relaxed, by disregarding the constraints imposed in the specification by physical laws or by the environment, before being fed to a counterexample guided abstraction refinement procedure, tailored to SCR. By relaxing the original specification, shorter spurious counterexamples are produced, favouring the abstraction refinement through the introduction of fewer abstraction predicates. Then, when a counterexample is concretizable with respect to the relaxed (concrete) specification but it is spurious with respect to the original specification, an efficient though incomplete refinement step is applied to the constraints, to cause the removal of the spurious case. This approach is experimentally assessed, comparing it with related techniques in the verification of properties and in automated test case generation, using various SCR specifications drawn from the literature as case studies. The experiments show that this new approach runs faster and scales better to larger, more complex specifications than related techniques."

Automated workarounds from Java program specifications based on SAT solving (2018-11). Uva, Marcelo; Ponzio, Pablo; Regis, Germán; Aguirre, Nazareno; Frías, Marcelo
"The failures that bugs in software lead to can sometimes be bypassed by the so-called workarounds: when a (faulty) routine fails, alternative routines that the system offers can be used in place of the failing one, to circumvent the failure. Existing approaches to workaround-based system recovery consider workarounds that are produced from equivalent method sequences, automatically computed from user-provided abstract models, or directly produced from user-provided equivalent sequences of operations. In this paper, we present two techniques for computing workarounds from Java code equipped with formal specifications, that improve previous approaches in two respects. First, the particular state where the failure originated is actively involved in computing workarounds, thus leading to repairs that are more state specific. Second, our techniques automatically compute workarounds on concrete program state characterizations, avoiding abstract software models and user-provided equivalences. The first technique uses SAT solving to compute a sequence of methods that is equivalent to a failing method on a specific failing state, but which can also be generalized to schemas for workaround reuse. The second technique directly exploits SAT to circumvent a failing method, building a state that mimics the (correct) behaviour of a failing routine, from a specific program state too. We perform an experimental evaluation based on case studies involving implementations of collections and a library for date arithmetic, showing that the techniques can effectively compute workarounds from complex contracts in an important number of cases, in time that makes them feasible to be used for run-time repairs. Our results also show that our state-specific workarounds enable us to produce repairs in many cases where previous workaround-based approaches are inapplicable."

EEG waveform analysis of P300 ERP with applications to brain computer interfaces (2018-11). Ramele, Rodrigo; Villar, Ana Julia; Santos, Juan Miguel
"The Electroencephalography (EEG) is not just a mere clinical tool anymore. It has become the de-facto mobile, portable, non-invasive brain imaging sensor to harness brain information in real time. It is now being used to translate or decode brain signals, to diagnose diseases or to implement Brain Computer Interface (BCI) devices. The automatic decoding is mainly implemented by using quantitative algorithms to detect the cloaked information buried in the signal. However, clinical EEG is based intensively on waveforms and the structure of signal plots. Hence, the purpose of this work is to establish a bridge to fill this gap by reviewing and describing the procedures that have been used to detect patterns in the electroencephalographic waveforms, benchmarking them on a controlled pseudo-real dataset of a P300-Based BCI Speller and verifying their performance on a public dataset of a BCI Competition."

Sampling from the 𝒢I0 distribution (2018-12). Chan, Debora; Rey, Andrea; Gambini, Juliana; Frery, Alejandro C.
"Synthetic Aperture Radar (SAR) images are widely used in several environmental applications because they provide information which cannot be obtained with other sensors. The 𝒢I0 distribution is an important model for these images because of its flexibility (it provides a suitable way for modeling areas with different degrees of texture, reflectivity and signal-to-noise ratio) and tractability (it is closely related to the Snedekor-F, Pareto Type II, and Gamma distributions). Simulated data are important for devising tools for SAR image processing, analysis and interpretation, among other applications. We compare four ways for sampling data that follow the 𝒢I0 distribution, using several criteria for assessing the quality of the generated data and the consumed processing time. The experiments are performed running codes in four different programming languages. The experimental results indicate that although there is no overall best method in all the considered programming languages, it is possible to make specific recommendations for each one."
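One standard construction represents a 𝒢I0(α, γ, L) deviate as the ratio of two independent Gamma variates; whether this is among the four methods compared in the paper is not stated here, so the NumPy sketch below is only an assumption-labeled illustration with arbitrary parameters.

```python
import numpy as np

def sample_gi0(alpha, gamma, looks, size, seed=None):
    """Draw 𝒢I0(alpha, gamma, L) samples as Z = X / Y, with
    X ~ Gamma(shape=L, rate=L) modeling speckle and
    Y ~ Gamma(shape=-alpha, rate=gamma) providing the (inverse-gamma) texture.
    Requires alpha < 0, gamma > 0, looks >= 1."""
    rng = np.random.default_rng(seed)
    x = rng.gamma(shape=looks, scale=1.0 / looks, size=size)
    y = rng.gamma(shape=-alpha, scale=1.0 / gamma, size=size)
    return x / y

# Illustrative parameters: a fairly textured area observed with 4 looks.
z = sample_gi0(alpha=-3.0, gamma=2.0, looks=4, size=10_000, seed=42)
print(z.mean(), z.var())
```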
Studying the evolution of content providers in IPv4 and IPv6 internet cores (2019). Carisimo, Esteban; Selmo, Carlos; Álvarez-Hamelin, Ignacio; Dhamdhere, Amogh
"There is recent evidence that the core of the Internet, which was formerly dominated by large transit providers, has been reshaped after the transition to a multimedia-oriented network, first by general-purpose CDNs and now by private CDNs. In this work we use k-cores, an element of graph theory, to define which ASes compose the core of the Internet and to track the evolution of the core since 1999. Specifically, we investigate whether large players in the Internet content and CDN ecosystem belong to the core and, if so, since when. In addition, we examine differences between the IPv4 and IPv6 cores. We further investigate regional differences in the evolution of large content providers. Finally, we show that the core of the Internet has incorporated an increasing number of content ASes in recent years. To enable reproducibility of this work, we provide a website to allow interactive analysis of our datasets to detect, for example, “up and coming” ASes using customized queries."
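The k-core machinery is available off the shelf; a minimal networkx sketch on a toy graph, standing in for a real AS-level topology snapshot:

```python
import networkx as nx

# Toy undirected AS-level graph; real input would be an AS-relationship snapshot.
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (1, 4), (4, 5), (5, 6)]
g = nx.Graph(edges)

core_number = nx.core_number(g)       # k-shell index of every AS
k_max = max(core_number.values())
top_core = nx.k_core(g, k=k_max)      # the innermost core of the graph

print(f"k_max = {k_max}, core ASes = {sorted(top_core.nodes())}")
```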
Mapping spatiotemporal data to RDF: a SPARQL endpoint for Brussels (2019). Vaisman, Alejandro Ariel; Chentout, Kevin
"This paper describes how a platform for publishing and querying linked open data for the Brussels Capital region in Belgium is built. Data are provided as relational tables or XML documents and are mapped into the RDF data model using R2RML, a standard language that allows defining customized mappings from relational databases to RDF datasets. In this work, data are spatiotemporal in nature; therefore, R2RML must be adapted to allow producing spatiotemporal Linked Open Data. Data generated in this way are used to populate a SPARQL endpoint, where queries are submitted and the result can be displayed on a map. This endpoint is implemented using Strabon, a spatiotemporal RDF triple store built by extending the RDF store Sesame. The first part of the paper describes how R2RML is adapted to allow producing spatial RDF data and to support XML data sources. These techniques are then used to map data about cultural events and public transport in Brussels into RDF. Spatial data are stored in the form of stRDF triples, the format required by Strabon. In addition, the endpoint is enriched with external data obtained from the Linked Open Data Cloud, from sites like DBpedia, Geonames, and LinkedGeoData, to provide context for analysis. The second part of the paper shows, through a comprehensive set of stSPARQL (the spatial extension to SPARQL) queries, how the endpoint can be exploited."
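A minimal triples map in standard R2RML gives the flavor of the mappings the paper adapts. The table, columns, and example.org URIs are invented, and the paper's spatiotemporal extension (emitting stRDF geometries for Strabon) is only noted in a comment.

```python
# Minimal R2RML mapping (standard W3C vocabulary) of the kind the paper extends.
# Table, column, and example.org names are illustrative only; the paper's
# adaptation additionally produces stRDF geometry literals for Strabon.
r2rml_mapping = """
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.org/brussels#> .

<#EventMap> a rr:TriplesMap ;
    rr:logicalTable [ rr:tableName "cultural_events" ] ;
    rr:subjectMap [
        rr:template "http://example.org/brussels/event/{event_id}" ;
        rr:class ex:CulturalEvent
    ] ;
    rr:predicateObjectMap [
        rr:predicate ex:eventName ;
        rr:objectMap [ rr:column "name" ]
    ] .
"""
print(r2rml_mapping)
```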
Mobility data warehouses (2019-04). Vaisman, Alejandro Ariel; Zimányi, Esteban
"The interest in mobility data analysis has grown dramatically with the wide availability of devices that track the position of moving objects. Mobility analysis can be applied, for example, to analyze traffic flows. To support mobility analysis, trajectory data warehousing techniques can be used. Trajectory data warehouses typically include, as measures, segments of trajectories, linked to spatial and non-spatial contextual dimensions. This paper goes beyond this concept, by including, as measures, the trajectories of moving objects at any point in time. In this way, online analytical processing (OLAP) queries, typically including aggregation, can be combined with moving object queries, to express queries like “List the total number of trucks running at less than 2 km from each other more than 50% of its route in the province of Antwerp” in a concise and elegant way. Existing proposals for trajectory data warehouses do not support queries like this, since they are based on either the segmentation of the trajectories, or a pre-aggregation of measures. The solution presented here is implemented using MobilityDB, a moving object database that extends the PostgreSQL database with temporal data types, allowing seamless integration with relational spatial and non-spatial data. This integration leads to the concept of mobility data warehouses. This paper discusses modeling and querying mobility data warehouses, providing a comprehensive collection of queries implemented using PostgreSQL and PostGIS as database backend, extended with the libraries provided by MobilityDB."

An evolutionary approach to translating operational specifications into declarative specifications (2019-07). Molina, Facundo; Cornejo, César; Degiovanni, Renzo; Regis, Germán; Castro, Pablo; Aguirre, Nazareno; Frías, Marcelo
"Various tools for program analysis, including run-time assertion checkers and static analyzers such as verification and test generation tools, require formal specifications of the programs being analyzed. Moreover, many of these tools and techniques require such specifications to be written in a particular style, or follow certain patterns, in order to obtain an acceptable performance from the corresponding analyses. Thus, having a formal specification sometimes is not enough for using a particular technique, since such specification may not be provided in the right formalism. In this paper, we deal with this problem in the increasingly common case of having an operational specification, while for analysis reasons requiring a declarative specification. We propose an evolutionary approach to translate an operational specification written in a sequential programming language, into a declarative specification, in relational logic. We perform experiments on a benchmark of data structure implementations, for which operational invariants are available, and show that our evolutionary computation based approach to translating specifications achieves very good precision in this context, and produces declarative specifications that are more amenable to analyses that demand specifications in this style. This is assessed in two contexts: bounded verification of data structure invariant preservation, and instance enumeration using symbolic execution aided by tight bounds."

Histogram of gradient orientations of signal plots applied to P300 detection (2019-07). Ramele, Rodrigo; Villar, Ana Julia; Santos, Juan Miguel
"The analysis of Electroencephalographic (EEG) signals is of utmost importance to aid in the diagnosis of mental disease and to increase our understanding of the brain. Traditionally, clinical EEG has been analyzed in terms of temporal waveforms, looking at rhythms in spontaneous activity, subjectively identifying troughs and peaks in Event-Related Potentials (ERP), or by studying graphoelements in pathological sleep stages. Additionally, the discipline of Brain Computer Interfaces (BCI) requires new methods to decode patterns from non-invasive EEG signals. This field is developing alternative communication pathways to transmit volitional information from the Central Nervous System. The technology could potentially enhance the quality of life of patients affected by neurodegenerative disorders and other mental illness. This work mimics what electroencephalographers have been doing clinically, visually inspecting, and categorizing phenomena within the EEG by the extraction of features from images of signal plots. These features are constructed based on the calculation of histograms of oriented gradients from pixels around the signal plot. It aims to provide a new objective framework to analyze, characterize and classify EEG signal waveforms. The feasibility of the method is outlined by detecting the P300, an ERP elicited by the oddball paradigm of rare events, and implementing an offline P300-based BCI Speller. The validity of the proposal is shown by offline processing a public dataset of Amyotrophic Lateral Sclerosis (ALS) patients and an own dataset of healthy subjects."
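The core step, a histogram of oriented gradients computed over an image of the plotted waveform, can be sketched with scikit-image. The naive rasterization and the ERP-like test signal below are simplifications of the paper's plotting pipeline, not its exact procedure.

```python
import numpy as np
from skimage.feature import hog

def plot_to_image(signal, height=64):
    """Naive rasterization of a 1-D signal into a binary 'signal plot' image."""
    sig = np.asarray(signal, dtype=float)
    sig = (sig - sig.min()) / (np.ptp(sig) + 1e-12)   # normalize to [0, 1]
    rows = ((1.0 - sig) * (height - 1)).astype(int)   # image rows grow downward
    img = np.zeros((height, len(sig)))
    img[rows, np.arange(len(sig))] = 1.0
    return img

# ERP-like test waveform: a positive deflection over noise (not real EEG).
rng = np.random.default_rng(0)
t = np.arange(256)
wave = np.exp(-((t - 150) ** 2) / 400.0) + 0.05 * rng.standard_normal(t.size)

features = hog(plot_to_image(wave), orientations=8,
               pixels_per_cell=(8, 8), cells_per_block=(1, 1))
print(features.shape)   # one 8-bin orientation histogram per 8x8 cell
```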
Analytical queries on semantic trajectories using graph databases (2019-10). Gómez, Leticia Irene; Kuijpers, Bart; Vaisman, Alejandro Ariel
"This article studies the analysis of moving object data collected by location-aware devices, such as GPS, using graph databases. Such raw trajectories can be transformed into so-called semantic trajectories, which are sequences of stops that occur at “places of interest.” Trajectory data analysis can be enriched if spatial and non-spatial contextual data associated with the moving objects are taken into account, and aggregation of trajectory data can reveal hidden patterns within such data. When trajectory data are stored in relational databases, there is an “impedance mismatch” between the representation and storage models. Graphs in which the nodes and edges are annotated with properties are gaining increasing interest to model a variety of networks. Therefore, this article proposes the use of graph databases (Neo4j in this case) to represent and store trajectory data, which can thus be analyzed at different aggregation levels using graph query languages (Cypher, for Neo4j). Through a real-world public data case study, the article shows that trajectory queries are expressed more naturally on the graph-based representation than over the relational alternative, and perform better in many typical cases."
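A Cypher query of the kind such analyses use, written against an invented graph layout (an Object linked to a Trajectory of Stop nodes, each at a place of interest); the labels, relationship types, and properties are assumptions, not the paper's actual schema.

```python
# Cypher aggregation over a hypothetical semantic-trajectory graph:
# count stops per category of place of interest, per moving object.
# Node labels, relationship types, and properties are illustrative assumptions.
cypher_query = """
MATCH (o:Object)-[:HAS_TRAJECTORY]->(:Trajectory)-[:CONTAINS]->(s:Stop),
      (s)-[:AT]->(p:PlaceOfInterest)
RETURN o.id AS object, p.category AS category, count(s) AS stops
ORDER BY object, stops DESC
"""
print(cypher_query)
```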