Our technical reports are published as Technical Reports by IMDEA Networks (ISSN to be determined). We use our TRs as a means of time-stamping as-yet-unpublished scientific work: when one of our papers is submitted to a conference or journal for peer review, the authors release a dated TR version of the paper through this webpage.
Volume 2018
- Yonas Mitike Kassa, Rubén Cuevas, Ángel Cuevas (October 2018)
A large-scale analysis of Facebook’s user-base and user engagement growth [PDF]
Understanding the evolution of the user-base and user engagement of online services is critical not only for service operators but also for customers, investors, and users. While research addressing this issue exists for online services such as Twitter, MySpace, or Google+, such a detailed analysis is missing for Facebook, currently the largest online social network. This paper presents the first detailed study of the demographic and geographic composition and evolution of the user-base and user engagement in Facebook over a period of three years. To this end, we have implemented a measurement methodology that leverages the Facebook marketing API to retrieve actual information about the number of total users and the number of daily active users across 230 countries and age groups ranging from 13 to 65+. The analysis reveals that Facebook is still growing and geographically expanding. Moreover, the growth pattern is heterogeneous across age groups, genders, and geographical regions. In particular, from a demographic perspective, Facebook shows the lowest growth among adolescents, and gender-based analysis shows that growth among men is still higher than among women. Our geographical analysis reveals that while Facebook's growth is slower in western countries, it grows fastest in developing countries mainly located in Africa and Central Asia; an analysis of penetration shows that these countries are at earlier stages of Facebook adoption. Leveraging external socioeconomic datasets, we also show that this heterogeneous growth can be characterized by indicators such as availability of and access to the Internet, Facebook popularity, and factors related to population growth and gender inequality.
- Joan Palacios, Daniel Steinmetzer, Adrian Loch, Matthias Hollick, Joerg Widmer (July 2018)
Addendum to Adaptive Codebook Optimization for Beam Training on Off-The-Shelf IEEE 802.11ad Devices [PDF]
This technical report is an extension to the paper Adaptive Codebook Optimization for Beam Training on Off-The-Shelf IEEE 802.11ad Devices. It provides additional information and more detailed steps for the mathematical derivations in the paper.
Volume 2016
- Hardy Halbauer, Patrik Rugeland, Yilin Li, Joerg Widmer, Marcin Rybakowski, Krystian Safjan, Arnesh Vijay, Isabelle Siaud, Anne-Marie Ulmer-Moll, David Gutierrez-Estevez, Mehrdad Shariat, Maciej Soszka (April 2016)
Architectural aspects of mm-wave radio access integration with 5G ecosystem [PDF]
Volume 2015
- Kirill Kogan, Danushka Menikkumbura, Gustavo Petri, Youngtae Noh, Sergey Nikolenko, Patrick Eugster (October 2015)
BASEL (Buffering Architecture SpEcification Language) [PDF]
Buffering architectures and policies for their efficient management constitute one of the core ingredients of a network architecture. In this work we introduce a new specification language, BASEL, which makes it possible to express virtual buffering architectures and management policies representing a variety of economic models. BASEL does not require the user to implement policies in a high-level language; rather, the entire buffering architecture and its policy are reduced to several comparators and simple functions. We show examples of buffering architectures in BASEL and demonstrate empirically the impact of various settings on performance.
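As a rough illustration of the idea that a buffer management policy can be reduced to a comparator, the Python sketch below implements a bounded buffer whose drop decision is entirely determined by a user-supplied comparison key. This is a conceptual sketch only, not BASEL syntax; all names are invented.

```python
# Illustrative sketch: a buffering policy expressed as a single comparator.
# When the buffer overflows, the packet with the smallest key is dropped
# (possibly the newly arrived one). Names here are invented, not BASEL.
import heapq

class ComparatorBuffer:
    def __init__(self, capacity, key):
        self.capacity = capacity
        self.key = key     # the policy's comparator: smallest key dropped first
        self.heap = []     # min-heap ordered by (key, arrival order)
        self._seq = 0      # tie-breaker so equal-key packets never compare

    def admit(self, packet):
        """Admit a packet; return the dropped packet if any, else None."""
        heapq.heappush(self.heap, (self.key(packet), self._seq, packet))
        self._seq += 1
        if len(self.heap) > self.capacity:
            return heapq.heappop(self.heap)[2]
        return None

# A "prefer high-value packets" policy is just one comparator choice.
buf = ComparatorBuffer(capacity=2, key=lambda p: p["value"])
drops = [buf.admit(p) for p in ({"id": i, "value": v}
                                for i, v in enumerate([3, 1, 5]))]
print(drops)  # the value-1 packet is evicted when the value-5 packet arrives
```

Swapping the `key` function swaps the economic model (e.g., value-based vs. FIFO drop) without touching the buffer implementation, which is the spirit of reducing a policy to comparators.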
- Nicola Bui, Stefan Valentin, Joerg Widmer (April 2015)
Anticipatory Quality-Resource Allocation for Multi-User Mobile Video Streaming [PDF]
Mobile video delivery forms the largest part of the traffic in cellular networks. Thus, optimizing resource allocation to satisfy users' quality of experience is becoming paramount in modern communications. This paper belongs to the line of research known as anticipatory networking, which uses predictions of wireless capacity to improve communication performance. In particular, we focus on the problem of optimal resource allocation for steady video delivery under maximum average quality constraints for multiple users. We formulate the problem as a piecewise linear program and provide a heuristic algorithm whose solution is close to optimal. Based on our formulation, we can trade off minimum video quality, average quality, and offered network capacity.
Volume 2014
- Evgenia Christoforou, Antonio Fernández Anta, Chryssis Georgiou, Miguel A. Mosteiro, Ángel Sánchez (August 2014)
Reputation-Based Mechanisms for Reliable Crowdsourcing Computation [PDF]
We consider an Internet-based Master-Worker framework for machine-oriented computing tasks (e.g., SETI@home) or human intelligence tasks (e.g., Amazon’s Mechanical Turk). In this framework a master sends tasks to unreliable workers, and the workers execute them and report back the results. We model such computations using evolutionary dynamics and consider three types of workers: altruistic, malicious, and rational. Altruistic workers always return the correct result, malicious workers always return an incorrect result, and rational (selfish) workers decide whether to be truthful depending on what increases their benefit. The goal of the master is to reach eventual correctness, that is, a stable state of the system in which it always obtains the correct results. To this end, we propose a mechanism that uses reinforcement learning to induce correct behavior in rational workers, while coping with malice by leveraging reputation schemes. We analyze our system as a Markov chain and give provable guarantees under which truthful behavior can be ensured. Simulation results, obtained using parameter values similar to those observed in real systems, reveal interesting trade-offs between various metrics and parameters, such as cost, time of convergence to truthful behavior, tolerance to cheaters, and the type of reputation metric employed.
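To give a flavor of how a reputation scheme can blunt malicious workers, the sketch below weights each reported answer by the worker's reputation and decays the reputation of workers whose answer disagrees with the accepted one. This is an illustrative toy, not the paper's exact mechanism; all names, the multiplicative update, and the penalty value are assumptions.

```python
# Illustrative reputation-weighted voting (NOT the paper's exact mechanism):
# the master accepts the answer with the highest total reputation, then
# multiplicatively penalizes workers who disagreed with it, so persistent
# cheaters lose influence over successive rounds.

def master_round(reports, reputation, penalty=0.5, reward=1.0):
    """reports: dict worker -> answer; reputation: dict worker -> weight > 0.
    Returns the accepted answer and updates reputations in place."""
    weight = {}
    for worker, answer in reports.items():
        weight[answer] = weight.get(answer, 0.0) + reputation[worker]
    accepted = max(weight, key=weight.get)
    for worker, answer in reports.items():
        reputation[worker] *= reward if answer == accepted else penalty
    return accepted

reputation = {"w1": 1.0, "w2": 1.0, "w3": 1.0}
# w3 is malicious and always reports 0 while the correct answer is 42.
for _ in range(3):
    master_round({"w1": 42, "w2": 42, "w3": 0}, reputation)
print(reputation)  # w3's weight has decayed to 0.125 after three rounds
```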
- Arash Asadi, Vincenzo Mancuso, Peter Jacko (July 2014)
Cooperative Device-to-Device Communications Achieve Maximum Throughput and Maximum Fairness in Cellular Networks [PDF]
Opportunistic schedulers such as MaxRate and Proportional Fair are known for trading off throughput and fairness of users in cellular networks. In this paper we show how to achieve maximum fairness without sacrificing throughput. We propose a novel solution that integrates opportunistic scheduling design principles and cooperative device-to-device communication capabilities in order to improve both fairness and capacity in cellular networks. We develop a mathematical approach and design a smart tie-breaking mechanism that enhances the fairness achieved by the MaxRate scheduler. We show that users that cooperatively form clusters benefit from both higher throughput and higher fairness. Our scheduling mechanism is simple to implement, scales linearly with the number of clusters, and achieves equal or better fairness than Proportional Fair schedulers.
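For background, the Proportional Fair rule the paper compares against can be sketched as follows: in each slot, serve the user maximizing instantaneous rate divided by its exponentially averaged throughput. This is an illustrative textbook version, not the paper's mechanism; the function names and smoothing factor are assumptions.

```python
# Textbook Proportional Fair scheduling sketch (illustrative background,
# not the paper's tie-breaking mechanism): in each slot, pick the user with
# the highest ratio of instantaneous rate to exponentially averaged throughput.

def pf_schedule(rates, avg_throughput, alpha=0.1):
    """rates: dict user -> instantaneous rate; avg_throughput is updated in place."""
    chosen = max(rates, key=lambda u: rates[u] / avg_throughput[u])
    for u in avg_throughput:
        served = rates[u] if u == chosen else 0.0
        avg_throughput[u] = (1 - alpha) * avg_throughput[u] + alpha * served
    return chosen

avg = {"u1": 1.0, "u2": 1.0}
first = pf_schedule({"u1": 10.0, "u2": 5.0}, avg)   # u1 wins: best rate, equal averages
second = pf_schedule({"u1": 10.0, "u2": 5.0}, avg)  # u2 wins: its average lagged behind
print(first, second)
```

The second slot illustrates the fairness pressure: after u1 is served once, its grown average throughput lowers its priority, so the slower user gets its turn.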
- Arash Asadi, Peter Jacko, Vincenzo Mancuso (May 2014)
Modeling Multi-mode D2D Communications in LTE [PDF]
In this work we propose a roadmap towards the analytical understanding of Device-to-Device (D2D) communications in LTE-A networks. Various D2D solutions have been proposed, including inband and outband D2D transmission modes, each of which exhibits different pros and cons in terms of complexity, interference, and achieved spectral efficiency. We go beyond traditional mode optimization and mode-selection schemes. Specifically, we formulate a general problem for the joint per-user mode selection, connection activation, and resource scheduling.
- Vittorio Cozzolino (February 2014)
Design and implementation of an Android context-aware application based on Floating Content [PDF]
Communication and information are two concepts that cannot be separated. We are now in the middle of the Information Age (also known as the Computer Age, Digital Age, or New Media Age), a period in human history characterized by the shift from the traditional industry that the industrial revolution brought through industrialization to an economy based on information computerization. During this Information Age, the way of exchanging information mutated and evolved towards more flexible, dynamic, and infrastructure-less means, with a transition spanning from the advent of the personal computer in the late 1970s, to the Internet reaching critical mass in the early 1990s, and finally to smartphones, whose widespread public adoption started in the late 2000s. We started to feel the urge to be "always connected", and smartphones were born to fulfil this need. They brought into the hands of every person the chance to access the Internet, share their experiences and feelings, upload photos and videos, play online games, and so on, whenever they had the chance to. Content sharing via the Internet became a widespread means for people to foster their relationships irrespective of physical distance, and smartphones supplied the perfect means to exchange information in a mobile environment. Today, smarter mobile devices continue to dominate worldwide and Android is at the forefront. Total smart mobile device shipments worldwide grew by 37.4 percent annually during the first quarter to approximately 308.7 million units, according to the market insight firm. Overall, Android remains on top, as 59.5 percent of all smart mobile devices shipped last quarter were running Google's mobile operating system.
The growth of mobile computing and the pervasiveness of smart user devices are progressively driving applications towards context-awareness, i.e., towards applications and services that allow users to exploit "any information that can be used to characterize the situation of an entity". But relying on infrastructure-based networks for location-aware services may often not be desirable, even though such networks remain essential to overcome distances and connect people around the world. By exploiting the diffusion and flexibility of the Android OS combined with the globally adopted Bluetooth technology, I developed a context-aware, infrastructure-less application focused on content sharing, dependent solely on the mobile devices in the vicinity and using principles of opportunistic networking. The net result is a best-effort application for floating content in which: 1) information dissemination is geographically limited; 2) the lifetime and spreading of information depend on interested nodes being available; 3) traffic can only be created and caused locally; and 4) content can only be added, not deleted. This thesis is structured as follows: Chapter 1. Introduction on social media and content sharing. Chapter 2. Floating content networks, description and applications. Chapter 3. Bluetooth, characteristics and protocol overview. Chapter 4. Presenting "Floaty", an Android application for floating content networks. Chapter 5. Performance evaluation and test results. Chapter 6. Future work and conclusions.
Volume 2013
- Adrian Loch, Thomas Nitsche, Alexander Kuehne, Matthias Hollick, Joerg Widmer, Anja Klein (December 2013)
Practical Challenges of IA in Frequency [PDF]
Interference Alignment (IA) is a promising physical-layer technique which increases the Degrees of Freedom (DoF) of a communication by aligning all interfering signals into the same dimension, while the desired signal lies unaffected in an orthogonal dimension. IA has been widely studied in theory, but only limited practical work exists, since it poses significant challenges for real-world deployments. In this report, we study issues which are key to enabling IA in practice.
- Pablo Salvador, Luca Cominardi, Francesco Gringoli, Pablo Serrano (August 2013)
Implementation details of the IEEE 802.11aa Group Addressed Transmission Service [PDF]
In this report we describe our implementation of the 802.11aa protocol, covering the design choices we made and their motivations. We detail all the required modifications at the driver and firmware levels, and we also motivate the choice of the implementation platform.
- Peter Perešíni, Dejan Kostic (June 2013)
Is the Network Turing-Complete? EPFL Technical Report 187131 [PDF]
Ensuring correct network behavior is hard. This is the case even for simple networks, and adding middleboxes only complicates this task. In this paper, we demonstrate a fundamental property of networks. Namely, we show a way of using a network to emulate the Rule 110 cellular automaton. We do so using just a set of network devices with simple features such as packet matching, header rewriting, and round-robin load balancing. Therefore, we show that a network can emulate any Turing machine. This ultimately means that analyzing dynamic network behavior can be as hard as analyzing an arbitrary program. Analyzing a network containing middleboxes is already understood to be hard. Our result shows that even using only statically configured switches can make the problem intractable.
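The Rule 110 automaton that the report emulates with network devices is itself simple to simulate directly. The sketch below shows the standard local update rule (a textbook implementation, independent of the paper's network construction): each cell's next state is a bit of the rule number 110 indexed by the 3-bit neighborhood pattern.

```python
# Rule 110 elementary cellular automaton: each cell's next state depends on
# its left neighbor, itself, and its right neighbor. The rule number
# 110 = 0b01101110 encodes the output bit for each of the 8 patterns 111..000.

RULE = 110

def step(cells):
    """One synchronous update of a finite tape with fixed 0 boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

# Evolve a single seed cell for a few generations.
tape = [0] * 15 + [1]
for _ in range(5):
    print("".join(".#"[c] for c in tape))
    tape = step(tape)
```

Because Rule 110 is known to be Turing-complete (Cook's result), any system that can implement this one-line update rule, including the switch pipeline in the paper, can emulate an arbitrary Turing machine.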
- Shahzad Ali, Gianluca Rizzo, Balaji Rengarajan, Marco Ajmone Marsan (May 2013)
Simple Approximate Analysis of Floating Content for Context-Aware Applications [PDF]
Context-awareness is a peculiar characteristic of an ever expanding set of applications that make use of a combination of restricted spatio-temporal locality and mobile communications, to deliver a variety of services to the end user. Communication requirements for context-aware applications significantly differ from those of ordinary applications; opportunistic communications are extremely well-suited to them, because they naturally incorporate context. Recently, an opportunistic communication paradigm called "Floating Content" was proposed, which is conceived to support serverless, distributed content sharing. In this work, we present a simple (in that it uses few primitive system parameters), approximate analytical model for the performance analysis of context-aware applications that use floating content. From a system design perspective, our analysis can be used to tune key system parameters so as to achieve the desired application performance. In particular, we apply our analysis to estimate the "success probability" for two representative categories of context-aware applications, and show how the system can be configured to achieve the application’s target. In order to complement our analytical study, we validate our model using extensive simulations under different settings and mobility patterns. Our simulation results show that our model-based predictions are indeed highly accurate under a wide range of conditions.
Volume 2012
- Andres Garcia-Saavedra, Balaji Rengarajan, Pablo Serrano, Daniel Camps-Mur, Xavier Costa-Perez (December 2012)
SOLOR: Self-Optimizing WLANs with Legacy-Friendly Opportunistic Relays [PDF]
Current IEEE 802.11 WLANs suffer from the well-known rate anomaly problem, which can drastically reduce network performance. Opportunistic relaying can address this problem, but three major considerations, typically treated separately by prior work, need to be taken into account for an efficient deployment in real-world systems: 1) relaying could imply increased power consumption, and nodes might be heterogeneous, both in power source (e.g., battery-powered vs. socket-powered) and power consumption profile; 2) similarly, nodes in the network are expected to have heterogeneous throughput needs and preferences in terms of the throughput vs. energy consumption trade-off; and 3) any proposed solution should be backwards-compatible, given the large number of legacy 802.11 devices already present in existing networks. In this paper, we propose a novel framework, Self-Optimizing, Legacy-Friendly Opportunistic Relaying (SOLOR), which jointly takes into account the above considerations and greatly improves network performance even in systems comprised mostly of vanilla nodes and unmodified access points. SOLOR jointly optimizes the topology of the network, i.e., which nodes are associated to each relay-capable node, and the relay schedules, i.e., how the relays split time between the downstream nodes they relay for and the upstream flow to an access point. The results, obtained for a large variety of scenarios and different node preferences, illustrate the significant gains achieved by our approach. Its feasibility is demonstrated through test-bed experimentation in a realistic deployment.
- Qing Wang, Balaji Rengarajan (September 2012)
Recouping Opportunistic Gain in Dense Base Station Layouts Through Energy-Aware User Cooperation [PDF]
To meet the increasing demand for wireless capacity, future networks are likely to consist of dense layouts of small cells. Thus, the number of concurrent users served by each base station is likely to be small resulting in diminished gains from opportunistic scheduling, particularly under dynamic traffic loads. We propose user-initiated traffic spreading, that is transparent to base stations, in order to extract higher opportunistic gain and improve downlink performance. For a specified tradeoff between energy consumption and performance, we characterize the optimal policy by modelling the system as a Markov decision process and also present a tractable heuristic that yields significant performance gains even in multi-user scenarios. Our simulations show that, in the performance-centric case, average delays can be lowered by up to 25% even in homogeneous scenarios where users have identical channel distribution, and up to 51% with heterogeneous users. Further, we show that the bulk of the performance improvement can be achieved with very small increase in energy consumption, e.g., in an energy-sensitive scenario, up to 73% of the performance improvement can typically be achieved at 14% of the energy cost of the performance-centric case.
- Juan Camilo Cardona, Rade Stanojevic, Rubén Cuevas (September 2012)
On Weather and Internet Traffic Demand - Technical Report [PDF]
The weather is known to have a major impact on the demand for utilities such as electricity or gas. Given that Internet usage is strongly tied to human activity, one might expect a similar correlation between Internet traffic demand and weather conditions. In this paper, empirical in nature, we demonstrate and quantify such a correlation between weather conditions and Internet traffic demand on different time-scales (from hourly to yearly). For that purpose we collect and use data from 8 Internet eXchange Points (IXPs), geographically spread over 5 continents, as indicators of Internet demand in those particular areas. We observe that seasonal traffic demand variability exists in locations with large yearly variations in temperature, while traffic demand in locations close to the equator (with low temperature variability) is season-independent. Using a fine-grained dataset from three European IXPs, we show that precipitation increases traffic demand by up to 6%, and, somewhat surprisingly, that all major types of ISPs (mobile, residential, content, etc.) observe very similar behavior with regard to the impact of precipitation on demand. One implication of the observed time-of-day-dependent impact of precipitation is that precipitation has a mild impact on IP transit costs. Finally, we hint at the possible benefits of seasonal variations for energy-proportional computing and for scheduling large-scale software releases.
Volume 2011
- Syed Hasan, Sergey Gorinsky (December 2011)
Obscure Giants: Detecting the Provider-Free ASes [PDF]
Internet routing depends on economic relationships between ASes (Autonomous Systems). Despite extensive prior research into these relationships, their characterization remains imprecise. In this paper, we focus on provider-free ASes, which reach the entire Internet without paying anyone for traffic delivery. While the ground truth about the PFS (the set of provider-free ASes) lies outside the public domain, we use trustworthy non-verifiable sources as a baseline for result validation. Straightforward extraction of the PFS from public datasets of inter-AS economic relationships yields poor results. We then develop a more sophisticated Temporal Cone (TC) algorithm that relies on topological statistics (customer cones of ASes) and exploits the temporal diversity of the datasets. Our evaluation shows that the TC algorithm infers the PFS from the same public datasets with significantly higher accuracy. We also assess the sensitivity of the TC algorithm to its parameters.
- Rade Stanojevic, Ignacio Castro, Sergey Gorinsky (October 2011)
CIPT: Using Tuangou to Reduce IP Transit Costs [PDF]
A majority of ISPs (Internet Service Providers) support connectivity to the entire Internet by transiting their traffic via other providers. Although transit prices per Mbps decline steadily, the overall transit costs of these ISPs remain high or even increase due to traffic growth. The discontent of ISPs with high transit costs has yielded notable innovations such as peering, content distribution networks, multicast, and peer-to-peer localization. While the above solutions tackle the problem by reducing the transit traffic, this paper explores a novel approach that reduces transit costs without altering the traffic. In the proposed CIPT (Cooperative IP Transit), multiple ISPs cooperate to jointly purchase IP (Internet Protocol) transit in bulk. The aggregate transit costs decrease due to the economies-of-scale effect of typical subadditive pricing as well as burstable billing: not all ISPs transit their peak traffic during the same period. To distribute the aggregate savings among the CIPT partners, we propose Shapley-value sharing of the CIPT transit costs. Using public data about IP traffic and transit prices, we quantitatively evaluate CIPT and show that significant savings can be achieved, both in relative and absolute terms. We also discuss the organizational embodiment, the relationship with transit providers, traffic confidentiality, and other aspects of CIPT.
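To illustrate Shapley-value cost sharing, the sketch below charges each partner its marginal contribution to the joint bill, averaged over all orders in which partners could join the coalition. The subadditive tariff used here is an invented volume-discount curve for illustration, not the billing model evaluated in the paper.

```python
# Shapley-value sharing of a joint transit bill: each ISP pays its average
# marginal contribution over all join orders. The tariff below is an
# ILLUSTRATIVE subadditive price (volume discount), not the paper's model.
from itertools import permutations

def transit_cost(traffic_mbps):
    """Hypothetical subadditive tariff: per-Mbps price falls with volume."""
    return 100.0 * traffic_mbps ** 0.75  # concave => economies of scale

def shapley_shares(demands):
    """demands: dict ISP name -> peak traffic (Mbps); returns cost shares."""
    isps = list(demands)
    shares = {i: 0.0 for i in isps}
    orders = list(permutations(isps))
    for order in orders:
        running = 0.0
        for isp in order:
            before = transit_cost(running)
            running += demands[isp]
            shares[isp] += transit_cost(running) - before
    return {i: s / len(orders) for i, s in shares.items()}

demands = {"A": 100.0, "B": 300.0, "C": 50.0}
shares = shapley_shares(demands)
# The shares sum exactly to the joint bill, which is below the sum of
# stand-alone bills, so every ordering of marginal contributions distributes
# the cooperative savings.
joint = transit_cost(sum(demands.values()))
standalone = sum(transit_cost(d) for d in demands.values())
print(shares, joint, standalone)
```

The telescoping sum per ordering guarantees efficiency (shares add up to the joint cost), which is one reason Shapley sharing is attractive for splitting a jointly purchased bulk contract.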
- Alberto Mozo, José Luis Lopéz-Presa, Antonio Fernández Anta (April 2011)
B-Neck: a distributed and quiescent max-min fair algorithm [PDF]
The problem of fairly distributing network capacity among a set of sessions has been widely studied. In this problem, each session connects a source and a destination via a single path, and its objective is to maximize its assigned transmission rate (i.e., its throughput). Since the links of the network have limited bandwidth, some criterion has to be defined to fairly distribute it among the sessions. A popular criterion is max-min fairness, which, in short, guarantees that each session i gets a rate λi such that no session s can increase λs without causing another session s′ to end up with a rate λs′ < λs. Many max-min fair algorithms have been proposed, both centralized and distributed. However, to our knowledge, all proposed distributed algorithms require control packets to be continuously transmitted to recompute the max-min fair rates when needed. In this paper we propose B-Neck, a max-min fair distributed algorithm that is also quiescent. This means that, in the absence of changes (i.e., session arrivals or departures), once the max-min rates have been computed, B-Neck stops generating network traffic. As far as we know, B-Neck is the first max-min fair distributed algorithm that does not require a continuous injection of control traffic to compute the rates. When changes occur, affected sessions are asynchronously informed of their new rates (i.e., sessions do not need to poll the network for changes). The correctness of B-Neck is formally proved, and extensive simulations are conducted. They show that B-Neck converges relatively fast and behaves nicely in the presence of sessions arriving and departing.
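The max-min fair rates themselves can be computed centrally by the classical water-filling procedure: repeatedly saturate the bottleneck link that offers its crossing sessions the smallest equal share. The sketch below is for intuition only; it is not the B-Neck algorithm, which computes the same rates distributedly and quiescently.

```python
# Centralized water-filling sketch of max-min fair allocation (for intuition;
# NOT B-Neck, which computes these rates distributedly). Each session is a
# list of links it traverses; each link has a capacity.

def max_min_fair(sessions, capacity):
    """sessions: dict name -> list of links; capacity: dict link -> capacity."""
    rate = {}
    remaining = dict(capacity)
    active = set(sessions)
    while active:
        # Equal share each still-unsaturated link could offer its active sessions.
        share = {l: remaining[l] / sum(1 for s in active if l in sessions[s])
                 for l in remaining
                 if any(l in sessions[s] for s in active)}
        bottleneck = min(share, key=share.get)
        fair = share[bottleneck]
        # Freeze every active session crossing the bottleneck at that rate.
        for s in [s for s in active if bottleneck in sessions[s]]:
            rate[s] = fair
            active.remove(s)
            for l in sessions[s]:
                remaining[l] -= fair
        del remaining[bottleneck]  # the bottleneck link is now saturated
    return rate

sessions = {"s1": ["a"], "s2": ["a", "b"], "s3": ["b"]}
capacity = {"a": 10.0, "b": 4.0}
print(max_min_fair(sessions, capacity))
# Link b is the first bottleneck: s2 and s3 each get 2.0;
# s1 then takes the remaining 8.0 on link a.
```

No session can raise its rate here without pushing another session below its own rate, which is exactly the max-min criterion stated above.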
- Antonio Fernández Anta, Miguel A. Mosteiro, Jorge Ramón Muñoz (March 2011)
Unbounded Contention Resolution in Multiple-Access Channels [PDF ]
A frequent problem in settings where a unique resource must be shared among users is how to resolve the contention that arises when all of them must use it, but the resource allows only for one user each time. The application of efficient solutions for this problem spans a myriad of settings such as radio communication networks or databases. For the case where the number of users is unknown but fixed, recent work has yielded fruitful results for local area networks and radio networks, although either the solution is suboptimal or a (possibly loose) upper bound on the number of users needs to be known. In this paper, we present the first (two) protocols for contention resolution in radio networks that are asymptotically optimal (with high probability), work without collision detection, and do not require information about the number of contenders. The protocols are evaluated and contrasted with the previous work by extensive simulations. These show that the complexity bounds obtained by the analysis are rather tight, and that the two protocols proposed have small and predictable complexity for all system sizes (unlike previous proposals).
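For readers unfamiliar with the setting, a classical baseline strategy (not one of the paper's two protocols, and unlike them it uses success feedback) maintains an estimate of the number of contenders, has each contender transmit with probability inversely proportional to that estimate, and doubles the estimate when a window of slots yields no success. All names and window sizes below are assumptions for illustration.

```python
# Classical estimate-doubling baseline for contention resolution (illustrative
# only; NOT the paper's protocols). A slot succeeds iff exactly one contender
# transmits in it; on a fruitless window, the estimate of contenders doubles.
import random

def resolve(num_contenders, rng):
    """Run until every contender has transmitted alone once; return slots used."""
    pending = num_contenders
    estimate = 1
    slots = 0
    while pending:
        success_in_window = False
        for _ in range(2 * estimate):            # a window of slots at this estimate
            slots += 1
            transmitters = sum(rng.random() < 1 / estimate
                               for _ in range(pending))
            if transmitters == 1:                # exactly one transmitter: success
                pending -= 1
                success_in_window = True
                if pending == 0:
                    return slots
        if not success_in_window:
            estimate *= 2                        # assume more contenders than thought
    return slots

print(resolve(5, random.Random(1)))
```

When the estimate roughly matches the number of pending contenders, each slot succeeds with probability close to 1/e, so the expected number of slots stays within a constant factor of the number of contenders.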
Volume 2010
- Alex Bikfalvi, Nazanin Magharei, Reza Rejaie, Jaime García-Reinoso (February 2010)
On the Design of Scalable Peer-to-Peer Video Caching [PDF]
Peer-to-Peer (P2P) video caching is a promising approach to accommodate asynchronous requests from cached content at individual peers. However, coherently managing a distributed, heterogeneous, dynamic, and potentially large-scale cache space is a challenging task. In particular, a key challenge is to effectively control the number of cached copies of popular streams in order to accommodate their concurrent requests with minimum thrashing in the cached content. A few prior studies on P2P video caching rely on global cache state to achieve this goal and therefore exhibit limited scalability. This paper examines key issues in the design of a scalable P2P video caching mechanism that can effectively control the number of cached copies of popular streams by leveraging only local information at each peer. We argue that the local notion of popularity can serve as an effective measure to perform cache replacement at individual peers. We sketch two straw-man P2P video caching techniques that rely on trends in the popularity of individual streams to control the required number of copies in a reactive or proactive fashion. Using simulation, we examine the performance of the proposed mechanisms along with the distributed (and uncoordinated) versions of the LRU and LFU mechanisms that use only the local workload at each peer. Our results show that distributed and uncoordinated P2P video caching generally exhibits good performance across a wide range of scenarios.
Volume 2009
- Arturo Azcorra, Isidro Laso-Ballesteros, Petros Daras, Carmen Guerrero (January 2009)
Research on Future Media Internet [PDF]
Copyright and all rights of the documents on this site are retained by authors or by other copyright holders. The documents may not be reposted without the explicit permission of the copyright holder.