December 2023, Bologna
Colocated with Middleware 2023
The DICG'23 workshop is co-located with ACM/IFIP Middleware 2023, which takes place on December 11-15, 2023, in Bologna.
This workshop addresses distributed infrastructures that enable human interaction and economic activity, with a focus on the common good. Daily life is transitioning to digital infrastructures, including friendships, education, employment, healthcare, finances, family connections, and more. These infrastructures can contribute to the common good, enabling us to work together to improve the well-being of people in our society and the wider world.
Private ownership of infrastructures does not seem to solve the traditional problems of the Tragedy of the Commons: pollution (spam and bot networks on social media), over-exhaustion of resources (net neutrality), and fairness (the gig economy). Privatization of digital commons also introduces the potential for monopolistic abuse, such as stifled innovation, price discrimination, and distorted market knowledge discovery. Within this workshop, we aim to explore viable alternatives to 'winner-takes-all' platform ecosystems. The failure of market mechanisms to address these issues suggests that such infrastructures could be treated as commons. We recognize the promising avenue of research built on Nobel laureate Elinor Ostrom's idea that the commons are a third way to organize complex human cooperation, beyond capitalist markets and governmental regulation.
Scientific challenges include, but are not limited to: the Tragedy of the Commons in such shared-resource systems, fake identities and Sybil attacks, the robot economy, trustworthiness in general, self-organizing machine learning, market infrastructures in a cashless society, and governance issues in decentralized systems.
This workshop focuses on the tools, frameworks, and algorithms that support the common good in a distributed environment. Both theoretical work and experimental approaches are welcome. Reproducibility, open-source software, and public datasets are encouraged. Each submission must clearly contribute to the middleware community, facilitating the development of applications by providing higher-level abstractions for better programmability, performance, scalability, and security.
The topics of interest include, but are not limited to:
Full papers can have a maximum length of 6 pages in the standard, 10pt ACM SIGPLAN format. The page limits include figures, tables, and references. All submitted papers will be judged through single-blind reviewing.
Please submit your manuscripts here.
All accepted papers will appear in the Middleware 2023 companion proceedings, which will be available in the ACM Digital Library prior to the workshop. At least one author of each paper must register for the workshop and present the paper.
The accepted papers and their reviews are also available on OpenReview.
All times are specified in Central European Time (CET). Click here to see the start time of the event in your time zone.
Prof. Bryan Ford leads the Decentralized/Distributed Systems (DEDIS) lab at the Swiss Federal Institute of Technology in Lausanne (EPFL). He focuses broadly on building secure decentralized systems, including privacy and anonymous communication, systems security, and blockchain technology. Since earning his Ph.D. at MIT, Ford has held faculty positions at Yale University and EPFL.
In contrast to many online services based on client-server infrastructure, peer-to-peer systems are usually designed as open commons. This is partly because, by design, peer-to-peer systems replicate data on end-user devices and typically use open implementations, precluding access control. Open commons, however, lower the incentives for end users to contribute the resources necessary to cover development and maintenance costs, resulting in chronic under-funding and few offerings of mature peer-to-peer alternatives. In this paper, we show how to design peer-to-peer systems as closed commons by making the replication of updates conditional on proven contributions, tracked by a blockchain or eventually-consistent ledger. We also present an economic model that incentivizes users to support both developers of the system and content producers. Finally, we identify factors suggesting that our economic model might be cost-competitive with cloud hosting for compatible applications.
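To make the conditional-replication idea above concrete, here is a minimal sketch (not the paper's actual implementation): a peer consults a shared ledger of proven contributions before accepting and replicating an update. The names ContributionLedger, MIN_CONTRIBUTION, and maybe_replicate are hypothetical and only illustrate the gating step.

# Hypothetical sketch: replicate an update only if its author's proven
# contributions, recorded on a shared ledger, meet a required threshold.
MIN_CONTRIBUTION = 10  # illustrative threshold, e.g., units of bandwidth/storage contributed

class ContributionLedger:
    """Toy stand-in for a blockchain or eventually-consistent ledger."""
    def __init__(self):
        self._credits = {}  # peer_id -> accumulated contribution credits

    def record(self, peer_id, amount):
        self._credits[peer_id] = self._credits.get(peer_id, 0) + amount

    def credits(self, peer_id):
        return self._credits.get(peer_id, 0)

def maybe_replicate(ledger, update, local_store):
    """Accept and replicate an update only from sufficiently contributing peers."""
    if ledger.credits(update["author"]) >= MIN_CONTRIBUTION:
        local_store.append(update)   # replicate on this peer
        return True
    return False                     # reject: the author has not contributed enough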
The concept of the tragedy of the commons, originally rooted in economics, describes the depletion of shared resources due to self-interested actions by individuals. This work proposes a novel solution to address this economic challenge by leveraging tokens to capture its multidimensional nature. By utilising blockchain and DLTs, this decentralised approach aims to achieve a social optimum while promoting self-regulation. The paper presents a mathematical treatment of the tragedy of the commons, incorporating multi-dimensional tokens and exploring the divergence from the classic optimal solution. The results highlight the potential of tokenisation in shaping a sustainable and efficient economy.
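The paper's multi-dimensional token formulation is not reproduced here; for orientation, the classic single-resource version of the problem can be written as below, where each of n users chooses an extraction level x_i, b is an individual benefit function, and c is a congestion cost that grows with total extraction (all symbols are illustrative, not the paper's notation).

% Classic tragedy of the commons, illustrative formulation only.
\[
\text{selfish user } i:\quad \max_{x_i \ge 0}\; b(x_i) - x_i\, c\Big(\sum_{j} x_j\Big),
\qquad
\text{social planner}:\quad \max_{x_1,\dots,x_n \ge 0}\; \sum_{i}\Big[\, b(x_i) - x_i\, c\Big(\sum_{j} x_j\Big) \Big].
\]

Because each selfish user ignores the cost its extraction imposes on the others through c, the equilibrium over-extracts relative to the planner's optimum; this gap is the divergence from the "classic optimal solution" that the abstract refers to.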
Despite the growth in the number of decentralized applications (DApps) supported by the Ethereum blockchain, we can observe the narrow scope of these DApps, concentrated within the fintech and games areas. One cause for this lack of range lies in the fees for transactions sent to the backing smart contracts. While consistent steps have been made to overcome cost-efficiency problems, such as introducing rollups as a second-layer solution, intertwined accessibility and security drawbacks still persist. Measures addressing some of these issues, such as account abstraction, have been proposed independently. These solutions bring changes in transaction handling that often exceed the scope of smart contracts, where the core of DApp logic resides. Integrating such measures often requires the use of new frameworks and an understanding of the changes in the transaction flow, which can prove challenging for a DApp developer. A question is whether the current landscape of solutions proposed for increasing usability can produce a consistent impact on DApp scope trends. In this position paper we try to answer this question, also raising the matter of its impact on DApp engineering.
A rollup is a network, implemented via smart contracts on a blockchain, that aims to scale that slow but general-purpose blockchain. The rollup executes transactions and posts the resulting state root, along with the transaction data, to the blockchain it is built on. As a result, the state root of the rollup network is always recorded on the underlying blockchain. The underlying blockchain is used to derive the state of the rollup itself, meaning that the rollup state cannot be changed arbitrarily, or such a change would be easily detected (subject to how its state is updated and recorded on the underlying blockchain). In turn, the rollup inherits some security from its underlying blockchain --- but the rollup network itself is not immune to direct attacks. Some attacks resemble other network-level attacks (e.g., denial-of-service attacks), while others result from the rollup's connection to its underlying blockchain (e.g., re-organization attacks). In this work, we collect a list of known attacks on rollups and illustrate their impact.
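As a rough sketch of the mechanism described above (heavily simplified, with hypothetical names and no fraud or validity proofs), a rollup batch can be thought of as: execute the transactions off-chain, commit to the resulting state with a state root, and post the root together with the transaction data to the underlying chain.

import hashlib, json

def state_root(state):
    """Toy state commitment: hash of the canonicalized state. A real rollup
    would use a Merkle-style tree so individual entries can be proven."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def execute(state, tx):
    """Illustrative transaction: move 'amount' from tx['from'] to tx['to']."""
    state = dict(state)
    state[tx["from"]] = state.get(tx["from"], 0) - tx["amount"]
    state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return state

def post_batch(l1_ledger, state, txs):
    """Apply a batch off-chain and record (state root, tx data) on the L1.
    The L1 record is what lets anyone re-derive and check the rollup state."""
    for tx in txs:
        state = execute(state, tx)
    l1_ledger.append({"state_root": state_root(state), "txs": txs})
    return state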
The continuous growth in data volume increases the interest in using peer-to-peer (P2P) systems not only to store static data (i.e., immutable data) but also to store and share mutable data -- data that are updated and modified by multiple users. Unfortunately, current P2P systems are mainly optimized to manage immutable data. Thus, each modification creates a new copy of the file, which leads to a large amount of "useless" network usage. Conflict-free Replicated Data Types (CRDTs) are data types built in a way that mutable data can be managed without the need for consensus-based concurrency control. A few studies have demonstrated the potential benefits of integrating CRDTs into the InterPlanetary File System (IPFS), a widely used open-source P2P content-sharing system. However, they have not been implemented and evaluated in a real IPFS deployment. This paper tries to fill the gap between theory and practice and provides a quantitative measurement of the performance of CRDTs in IPFS. Accordingly, we introduce IM-CRDT, an implementation of CRDTs in IPFS that focuses on a simple data type (i.e., the Set), and carry out extensive experiments to verify whether CRDTs can efficiently be utilized in IPFS to handle mutable data. Experiments on Grid'5000 show that IM-CRDT reduces the data transfer of an update by up to 99.96% and the convergence time by 54.6%-62.6%. More importantly, we find that IM-CRDT can sustain low convergence time under concurrent updates.
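For readers unfamiliar with CRDTs, the sketch below shows the simplest state-based set CRDT, a grow-only set: because merge is a set union (commutative, associative, and idempotent), replicas can exchange states in any order and still converge without consensus. IM-CRDT's actual Set design and its IPFS integration are not reproduced here.

class GSet:
    """State-based grow-only set CRDT: merge = set union, so replicas converge
    regardless of message order or duplication (no consensus needed)."""
    def __init__(self):
        self.elements = set()

    def add(self, value):
        self.elements.add(value)

    def merge(self, other):
        self.elements |= other.elements

# Two replicas updated concurrently still converge after exchanging state.
a, b = GSet(), GSet()
a.add("cid-1")
b.add("cid-2")
a.merge(b)
b.merge(a)
assert a.elements == b.elements == {"cid-1", "cid-2"}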
We investigate how well IPFS functions in real-world restrictive network environments. In a series of experiments, we utilize four vantage points, one of which lies behind the Great Firewall of China (GFW), to ascertain how various parts of the IPFS ecosystem perform in these settings. We test HTTP gateways and find that, although they are not systematically blocked, only about a third function in China, in comparison to Germany. Evaluating P2P functionality, we run experiments on data exchange between the four nodes. We find that the GFW has little measurable impact on these functionalities. The main inhibiting factor for P2P functionality remains network address translation (NAT). Lastly, to help NATed nodes spread their content, we propose and evaluate using public gateways as temporary replicators, but find only marginal gains.
Machine learning is becoming a key technology to make systems smarter and more powerful. Unfortunately, training large and capable ML models is resource-intensive and requires high operational skills. Serverless computing is an emerging paradigm for structuring applications to benefit from on-demand computing resources and achieve horizontal scalability while making resources easier to consume. As such, it is an ideal substrate for the resource-intensive and often ad-hoc task of training deep learning models and has a strong potential to democratize access to ML techniques. However, the design of serverless platforms makes deep learning training difficult to translate efficiently to this new world. Apart from the intrinsic communication overhead (serverless functions are stateless), serverless training is limited by the reduced access to GPUs, which is especially problematic for running deep learning workloads, known to be notoriously demanding. To address these limitations, we present KubeML, a purpose-built deep learning system for serverless computing. KubeML fully embraces GPU acceleration while reducing the inherent communication overhead of deep learning workloads to match the limited capabilities of the serverless paradigm. In our experiments, we are able to outperform TensorFlow for smaller local batches, reach a 3.98x faster time-to-accuracy in these cases, and maintain a 2.02x speedup for commonly benchmarked machine learning models like ResNet34.
General Co-chairs:
Organization Chairs:
Program Committee: