New papers of our group

Exploring the influence of industries and randomness in stock prices. Empirical Economics.

This study explores the behavior of time series of historical prices and makes two additional contributions to the literature. In summarized form, we present an overview of the financial theories that discuss the movements of stock prices and their connection with industry trends. Within this theoretical framework, we first propose distinguishing between historical stock prices and series generated by a random-walk approach, and second, breaking down the analysis of historical prices by industry. Similarities among price series are extracted through a clustering methodology based on an approximation to the non-computable Kolmogorov complexity. We model price series following geometric Brownian motion and compare them to historical series of stock prices. Our first contribution confirms the existence of hidden common patterns in time series of historical prices that are clearly distinguishable from simulated series. The second contribution establishes strong connections among firms carrying out similar industrial activities. The results confirm that stock prices belonging to the same industry behave similarly, whereas they behave differently from those of firms in other industries. Our research sheds new light on the stylized fact of the non-randomness of stock prices by pointing at fundamental aspects of the industry as partial explanatory factors behind price movements.
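The null model the study compares against is geometric Brownian motion. A minimal sketch of simulating one GBM price path; all parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

def gbm_path(s0, mu, sigma, n_steps, dt=1 / 252, rng=None):
    """Simulate one geometric Brownian motion price path:
    S_{t+1} = S_t * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z)."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(n_steps)
    log_returns = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    # Prepend 0 so the path starts exactly at s0.
    return s0 * np.exp(np.cumsum(np.concatenate([[0.0], log_returns])))

# Illustrative parameters: 5% annual drift, 20% volatility, one trading year.
path = gbm_path(s0=100.0, mu=0.05, sigma=0.20, n_steps=252,
                rng=np.random.default_rng(42))
```

Simulated paths like this one can then be clustered together with historical series to check whether the two groups separate.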

Genetic Programming and Evolvable Machines (GPEM).

This study presents the implementation of an automated trading system that uses three critical analyses to determine timing decisions and portfolios for investment. The approach is based on a meta-grammatical evolution methodology that combines technical, fundamental and macroeconomic analysis in a hybrid top-down paradigm. First, the method provides a low-risk portfolio by analyzing countries and industries. Next, aiming to focus on the most robust companies, the system filters the portfolio by analyzing their economic variables. Finally, the system analyzes prices and volumes to optimize investment decisions during a given period. System validation involves a series of experiments in the European financial markets, using a data set of over nine hundred companies. The final solutions have been compared with static strategies and other evolutionary implementations, and the results show the effectiveness of the proposal.

A hybrid automated trading system based on multi-objective grammatical evolution. Journal of Intelligent and Fuzzy Systems. Volume XX, XXXX 2016, Pages XXX–XXX.


This paper describes a hybrid automated trading system (ATS) based on grammatical evolution and microeconomic analysis. The proposed system takes advantage of the flexibility of grammars for introducing and testing novel characteristics. The ATS introduces the self-generation of new technical indicators and multi-strategies for stopping unforeseen losses. Additionally, this work introduces a novel optimization method that combines multi-objective optimization with a grammatical evolution methodology. We implemented and tested three mono-objective ATSs, each with a different fitness function, as well as two multi-objective ATSs. Experimental results compare them to the Buy and Hold strategy and a previous approach; our system beats both in returns and in number of positive operations. In particular, the multi-objective approach achieved returns of up to 20% in very volatile periods, showing that the combination of fitness functions is beneficial for the ATS.
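Several of the systems above rely on grammatical evolution's genotype-to-phenotype mapping: a genome of integer codons is decoded against a grammar. A minimal sketch with a hypothetical toy trading-rule grammar (illustrative only, not the papers' actual grammars):

```python
# Hypothetical toy grammar: each non-terminal maps to a list of productions,
# each production being a list of tokens.
GRAMMAR = {
    "<signal>": [["buy if ", "<cond>"], ["sell if ", "<cond>"]],
    "<cond>": [["<ind>", " > ", "<ind>"], ["<ind>", " < ", "<ind>"]],
    "<ind>": [["sma_5"], ["sma_20"], ["rsi_14"], ["close"]],
}

def ge_map(genome, start="<signal>", max_wraps=2):
    """Standard GE mapping: each codon, taken modulo the number of productions,
    expands the leftmost non-terminal; the genome is re-read ("wrapped") a
    bounded number of times if it runs out of codons."""
    seq = [start]
    i = wraps = 0
    while any(s in GRAMMAR for s in seq):
        idx = next(k for k, s in enumerate(seq) if s in GRAMMAR)
        if i >= len(genome):
            if wraps == max_wraps:
                return None  # mapping failed: out of codons and wraps
            i, wraps = 0, wraps + 1
        choices = GRAMMAR[seq[idx]]
        seq[idx:idx + 1] = choices[genome[i] % len(choices)]
        i += 1
    return "".join(seq)
```

For example, `ge_map([7, 3, 11, 5])` decodes to the rule `"sell if close < sma_20"`; evolution then searches over genomes, scoring each decoded rule with a trading fitness function.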

Using Evolutionary Algorithms to determine the residual stress profile across welds of age-hardenable aluminum alloys. Applied Soft Computing. Volume 40, March 2016, Pages 429–438.


This paper presents an evolutionary method to obtain the unstressed lattice spacing, d0, required to calculate the residual stress profile across a weld of an age-hardenable aluminum alloy, AA2024. Due to the age-hardening nature of this alloy, the d0 value depends on the heat treatment. In the case of welds, the heat treatment imposed by the welding operation differs significantly depending on the distance to the center of the joint. This implies that a variation of d0 across the weld is expected, a circumstance which limits the possibilities of conventional analytical methods to determine the required d0 profile. The interest of the paper is, therefore, two-fold: first, to demonstrate that the application of an evolutionary algorithm solves a problem not previously addressed in the literature, namely the determination of the data required to calculate the residual stress state across a weld; second, to show the robustness of the approximation used, which allows obtaining solutions under different constraints of the problem. Our results confirm the capacity of evolutionary computation to reach realistic solutions under three different scenarios of initial conditions and available experimental data.

Modeling and predicting the Spanish Bachillerato academic results over the next few years using a random network model. Physica A: Statistical Mechanics and its Applications.


Academic performance is a concern of paramount importance in Spain, where around 30% of the students in the last two years of high school, before entering the labor market or university, do not achieve the minimum knowledge required by the Spanish educational law in force. In order to analyze this problem, we propose a random network model to study the dynamics of academic performance in Spain. Our approach is based on the idea that both good and bad study habits are a mixture of personal decisions and the influence of classmates. Moreover, in order to account for the uncertainty in the estimation of model parameters, we perform a large number of simulations, taking as model parameters those that best fit the data, as returned by the Differential Evolution algorithm. This technique allows us to forecast model trends over the next few years using confidence intervals.
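Differential Evolution, the parameter-fitting engine mentioned above, is simple to state. A self-contained sketch of the classic DE/rand/1/bin scheme, here minimizing a toy sphere function rather than the paper's model-fitting error:

```python
import numpy as np

def de(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=200, seed=0):
    """Classic DE/rand/1/bin: mutate with the scaled difference of two random
    population vectors, binomially cross with the parent, keep the trial
    vector if it is no worse."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            if not cross.any():
                cross[rng.integers(dim)] = True  # force at least one gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = fit.argmin()
    return pop[best], fit[best]

# Toy usage: minimize a 3D sphere function (not the paper's actual objective).
best_x, best_f = de(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 3)
```

In the paper's setting, `f` would instead be the error between the random network model's output and the observed academic results.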

Thermal-aware floorplanner for 3D IC, including TSVs, liquid microchannels and thermal domains optimization. Applied Soft Computing. Volume 34, July 2015, Pages 164–177.


3D stacked technology has emerged as an effective mechanism to overcome the physical limits and communication delays found in 2D integration. However, 3D technology also presents several drawbacks that prevent its smooth application. Two of the major concerns are heat reduction and power density distribution. In our work, we propose a novel 3D thermal-aware floorplanner that includes: (1) an effective thermal-aware process with three different evolutionary algorithms that aim to solve the soft computing problem of optimizing the placement of functional units and through-silicon vias, as well as the smooth inclusion of active cooling systems and new design strategies; (2) an approximated thermal model inside the optimization loop; (3) an optimizer for active cooling (liquid channels); and (4) a novel technique, based on air channel placement, designed to isolate thermal domains. The experimental work is conducted on a realistic many-core single-chip architecture based on the Niagara design. Results show promising improvements of the thermal and reliability metrics, and also show optimal scaling capabilities to target future-trend many-core systems.

Optimizing L1 cache for embedded systems through grammatical evolution. Soft Computing.


Nowadays, embedded systems are provided with cache memories that are large enough to influence both performance and energy consumption as never before in this kind of system. In addition, the cache memory system has been identified as a component that can improve both metrics by adapting its configuration to the memory access patterns of the applications being run. However, given that cache memories have many parameters, each of which may take a high number of different values, designers are faced with a wide and time-consuming exploration space. In this paper, we propose an optimization framework based on Grammatical Evolution (GE) which is able to efficiently find the best cache configurations for a given set of benchmark applications. This metaheuristic allows an important reduction of the optimization runtime, obtaining good results in a low number of generations. This reduction is further increased by the efficient storage of previously evaluated cache configurations. Moreover, we selected GE because the plasticity of the grammar eases the creation of phenotypes that form the call to the cache simulator required for the evaluation of the different configurations. Experimental results for the Mediabench suite show that our proposal is able to find cache configurations that obtain an average improvement of 62% versus a real-world baseline configuration.

Modeling glycemia in humans by means of Grammatical Evolution. Applied Soft Computing. Volume 20, July 2014, Pages 40–53


Diabetes mellitus is a disease that affects hundreds of millions of people worldwide. Maintaining good control of the disease is critical to avoid severe long-term complications. In recent years, several increasingly advanced artificial pancreas systems have been proposed and developed. However, there is still a lot of research to do. One of the main problems that arises in the (semi-)automatic control of diabetes is obtaining a model that explains how glycemia (glucose levels in blood) varies with insulin, food intake and other factors, fitting the characteristics of each individual patient. This paper proposes the application of evolutionary computation techniques to obtain customized models of patients, unlike most previous approaches, which obtain averaged models. The proposal is based on a kind of grammar-based genetic programming known as Grammatical Evolution (GE). The proposal has been tested with in silico patient data and the results are clearly positive. We also present a study of four different grammars and five objective functions. In the test phase, the models characterized the glucose with a mean average percentage error of 13.69%, also modeling both hyperglycemic and hypoglycemic situations well.

glUCModel: A monitoring and modeling system for chronic diseases applied to diabetes. Journal of Biomedical Informatics. Volume 48, April 2014, Pages 183–192.


Chronic patients must carry out rigorous control of diverse factors in their lives: diet, sports activity, medical analyses or blood glucose levels, among others. This is a hard task, because some of these controls are performed very often; for instance, some diabetics measure their glucose levels several times every day, and patients with chronic renal disease, a progressive loss of renal function, must strictly control their blood pressure and diet. In order to facilitate this task for both the patient and the physician, we have developed a web application for chronic disease control, which we have particularized to diabetes. This system, called glUCModel, improves the communication and interaction between patients and doctors, and eventually the quality of life of the former. Through a web application, patients can upload their personal and medical data, which are stored in a centralized database. In this way, doctors can consult this information and have better control over patient records. glUCModel also presents three novelties in disease management: a recommender system, an e-learning course and a module for the automatic generation of glucose level models. The recommender system uses Case-Based Reasoning to provide automatic recommendations to the patient, based on the recorded data and physician preferences, to improve their habits and knowledge about the disease. The e-learning course provides patients with a space to consult information about the illness, and also to assess their own knowledge about it. Blood glucose levels are modeled by means of evolutionary computation, allowing glucose levels to be predicted using particular features of each patient. glUCModel was developed as a system where a web layer allows access from any device connected to the Internet, such as desktop computers, tablets or mobile phones.

Real time evolvable hardware for optimal reconfiguration of cusp-like pulse shapers. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. Volume 763, 1 November 2014, Pages 124–131.


The design of a cusp-like digital pulse shaper for particle energy measurements requires the definition of four parameters whose values are set based on the nature of the shaper input signal (timing, noise, …) provided by a sensor. However, after high doses of radiation, sensors degrade and their output signals no longer meet the original characteristics, which may lead to erroneous measurements of the particle energies. We present in this paper an evolvable cusp-like digital shaper which is able to auto-recalibrate the original hardware implementation into a new design that matches the original specifications under the new sensor features.

A methodology to automatically optimize dynamic memory managers applying grammatical evolution. Journal of Systems and Software. Volume 91, May 2014, Pages 109–123.


Modern consumer devices must execute multimedia applications that exhibit high resource utilization. In order to execute these applications efficiently, the dynamic memory subsystem needs to be optimized. This complex task can be tackled in two complementary ways: optimizing the application source code, or designing custom dynamic memory management mechanisms. The first approach is currently well established, and several automatic methodologies have been proposed. Regarding the second approach, software engineers often write custom dynamic memory managers from scratch, which is a difficult and error-prone task. This paper presents a novel way to automatically generate custom dynamic memory managers that optimize both the performance and memory usage of the target application. The design space is pruned using grammatical evolution, converging to the best dynamic memory manager implementation for the target application. Our methodology achieves important improvements (62.55% and 30.62% better on average in performance and memory usage, respectively) when its results are compared to five different general-purpose dynamic memory managers.

Real-time evolvable pulse shaper for radiation measurements. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. Volume 727, 1 November 2013, Pages 73–83


In the last two decades, recursive algorithms for real-time digital pulse shaping in pulse height measurements have been developed and published in a number of articles and textbooks. All these algorithms try to synthesize, in real time, optimum or near-optimum shapes in the presence of noise. Even though some of these shapers can be considered effective designs, side effects like aging cannot be ignored: after sensor degradation, the obtained signal is no longer valid. In this regard, we present in this paper a novel technique, based on evolvable hardware concepts, that is able to evolve the degraded shaper into a new design with better performance than the original one under the new sensor features.

Comparative study of meta-heuristic 3D floorplanning algorithms. Neurocomputing.

The constant need to improve performance has driven the invention of 3D chips. The improvement is achieved thanks to the reduction of wire length, which results in decreased interconnection delay. However, 3D stacks have poorer heat dissipation due to the inner layers, which leads to increased temperature and the appearance of hot spots. This problem can be mitigated through appropriate floorplanning. For this reason, in this work we present and compare five different solutions for the floorplanning of 3D chips. Each solution uses a different representation, and all are based on meta-heuristic algorithms: three of them on simulated annealing, and the other two on evolutionary algorithms. The results show the great capability of all the solutions in optimizing temperature and wire length, as they all exhibit significant improvements compared to the benchmark floorplans.
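The simulated annealing variants above explore placements by perturbing a layout and occasionally accepting worse solutions. A minimal sketch with a hypothetical four-block netlist on a 2D grid, minimizing wire length only (real 3D floorplanners also optimize temperature and handle block shapes):

```python
import math
import random

# Hypothetical netlist: pairs of blocks joined by a wire.
NETS = [(0, 1), (1, 2), (2, 3), (0, 3)]

def wirelength(pos):
    """Total Manhattan wire length over all nets."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in NETS)

def anneal(n_blocks=4, grid=8, t0=10.0, cooling=0.995, steps=5000, seed=1):
    """Move one random block per step; accept a worse layout with probability
    exp(-delta/T), while the temperature T decays geometrically."""
    rnd = random.Random(seed)
    pos = {i: (rnd.randrange(grid), rnd.randrange(grid))
           for i in range(n_blocks)}
    cost, t = wirelength(pos), t0
    for _ in range(steps):
        i = rnd.randrange(n_blocks)
        old = pos[i]
        pos[i] = (rnd.randrange(grid), rnd.randrange(grid))
        new_cost = wirelength(pos)
        if new_cost <= cost or rnd.random() < math.exp((cost - new_cost) / t):
            cost = new_cost          # accept the move
        else:
            pos[i] = old             # reject and restore
        t *= cooling
    return pos, cost
```

The early high-temperature phase escapes local minima; the late near-greedy phase refines the layout, which is why cooling schedules matter so much in floorplanning.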

3D thermal-aware floorplanner using a MOEA approximation. Integration, the VLSI Journal. Volume 46, Issue 1, January 2013, Pages 10–21.


Two of the major concerns in 3D stacked technology are heat removal and power density distribution. In our work, we propose a novel 3D thermal-aware floorplanner. Our contributions include: (1) a novel multi-objective formulation that considers the thermal and performance constraints in the optimization approach; (2) two efficient Multi-Objective Evolutionary Algorithms (MOEAs) for the representation of the floorplanning model and for the optimization of thermal parameters and wire length; (3) a smooth integration of the MOEA model with an accurate thermal model of the architecture. The experimental work is conducted for two realistic many-core single-chip architectures: a homogeneous system resembling Intel's SCC, and an improved heterogeneous setup. The results show promising improvements in the mean and peak temperature, as well as in the thermal gradient, with a reduced overhead in the wire length of the system.

Blind optimisation problem instance classification via enhanced universal similarity metric. Memetic Computing.


The ultimate aim of Memetic Computing is the fully autonomous solution of complex optimisation problems. For a while now, the Memetic algorithms literature has been moving in the direction of ever increasing generalisation of optimisers, initiated by seminal papers such as Krasnogor and Smith (IEEE Trans 9(5):474–488, 2005; Workshops Proceedings of the 2000 International Genetic and Evolutionary Computation Conference (GECCO2000), 2000), Krasnogor and Gustafson (Advances in nature-inspired computation: the PPSN VII Workshops 16(52), 2002) and followed by related and more recent work such as Ong and Keane (IEEE Trans Evol Comput 8(2):99–110, 2004), Ong et al. (IEEE Comp Int Mag 5(2):24–31, 2010), Burke et al. (Hyper-heuristics: an emerging direction in modern search technology, 2013). In this recent trend to ever greater generalisation and applicability, the research has focused on selecting (or even evolving) the right search operator(s) to use when tackling a given instance of a fixed problem type (e.g. Euclidean 2D TSP) within a range of optimisation frameworks (Krasnogor, Handbook of natural computation, Springer, Berlin/Heidelberg, 2009). This paper is the first step up the generalisation ladder, where one assumes that the optimiser is given (perhaps by other solvers that do not necessarily know how to deal with a given problem instance) a problem instance to tackle, and it must autonomously, and without human intervention, pre-select the likely family class of problems the instance belongs to. To do that, we propose an Automatic Problem Classifier System able to automatically identify which kind of instance or problem the system is dealing with. We test an innovative approach to the Universal Similarity Metric, as a variant of the normalised compression distance (NCD), to classify different problem instances. This version is based on the management of compression dictionaries. The results obtained are encouraging, as we achieve a 96% average classification success rate with the studied dataset.
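The paper's metric is a dictionary-based variant of the NCD; the standard compressor-based NCD it builds on can be sketched in a few lines (using zlib here purely for illustration):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length. Values near 0 indicate very similar
    inputs; values near 1 indicate unrelated inputs."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

An unseen problem instance can then be assigned to the class of its nearest labelled instances under this distance, which is the essence of compression-based instance classification.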

Simulation of high-performance memory allocators. Microprocessors and Microsystems. Volume 35, Issue 8, November 2011, Pages 755–765.


Over the last 30 years, a large variety of memory allocators have been proposed. Since the performance, memory usage and energy consumption of each memory allocator differ, software engineers often face difficult choices in selecting the most suitable approach for their applications. To this end, custom allocators are developed from scratch, which is a difficult and error-prone process. This issue has a special impact in the field of portable consumer embedded systems, which must execute a limited set of multimedia applications demanding high performance and extensive memory usage at a low energy consumption. This paper presents a flexible and efficient simulator to study Dynamic Memory Managers (DMMs), a composition of one or more memory allocators. This novel approach allows programmers to simulate custom and general DMMs, which can be composed without incurring any additional runtime overhead or additional programming cost. We show that this infrastructure simplifies DMM construction, mainly because the target application does not need to be recompiled every time a new DMM must be evaluated, and because we propose a structured method to search and build DMMs in an object-oriented fashion. Within a search procedure, the system designer can choose the "best" allocator by simulation for a particular target application and embedded system. In our evaluation, we show that our scheme delivers better performance, less memory usage and less energy consumption than single memory allocators.

new_papers_of_our_group.txt · Last modified: 2017/07/20 08:44 by J. Ignacio Hidalgo