Held biennially, this five-day event carries on the tradition of Stochastic Networks conferences initiated in 1987. The conference brings together mathematicians and applied researchers who share an interest in stochastic network models. Explore past conferences here. All in-person participants are welcome to present a poster and should indicate their desire to do so when registering.
Dates: June 15–19, 2026
Location: Gleacher Center
Lodging: Book early! We have secured a limited number of discounted rooms at The InterContinental Chicago Magnificent Mile and the Sheraton Grand Chicago Riverwalk. Once those rooms have been sold, we encourage out-of-town attendees to choose from the many other hotels available in downtown Chicago. Use the following links to book in one of our discounted blocks.
The InterContinental Chicago Magnificent Mile
The Sheraton Grand Chicago Riverwalk (Please call if you would like to book dates outside of June 14–19.)
Registration: Registration is open from now until May 15, 2026, with late registration fees going into effect March 1, 2026. Your registration gives you access to all talks, poster sessions, lunches, and the banquet. Registered attendees will have the option to pay for one additional guest to attend the banquet dinner. We hope to see you all in person, but for those who cannot make it to Chicago, there is a view-only virtual option. No virtual presentations or posters will be allowed. Registration is non-refundable. Please see below for this year’s rates.
| Registration Type | Rate |
| --- | --- |
| Student | $75 |
| Non-Student | $200 |
| Student Virtual | $50 |
| Non-Student Virtual | $100 |
| Guest Banquet | $65 |
Scientific Program Committee:
- Mor Armony
- Francois Baccelli
- Mor Harchol-Balter
- Jim Dai
- Michel Mandjes
- Kavita Ramanan
- Rhonda Righter
- Rajesh Sundaresan
- Peter Taylor
- Neil Walton
- Ruth Williams
- Jiheng Zhang
For questions, contact the local organizing committee: Amy R. Ward (Chair), René Caldentey, John Birge, or Baris Ata.
We gratefully acknowledge outside funding from the Applied Probability Trust (see award program here). We are also very happy to be co-sponsored by IMS.
Speakers
Speaker: Opher Baron, Professor, University of Toronto
Title: Machine Learning, Causal Queueing, and SiMLQ for Data-Driven Simulation
Abstract: The objective of this talk is to expose researchers to the vast possibilities of using modern machinery and data for implementing effective management analytics for queueing processes. Such processes are ubiquitous in modern economies, e.g., customers waiting for service, inventory waiting for processing/transportation, payments and invoices waiting to be generated/cleared, and computing tasks waiting for resources.
I will discuss recent developments in queueing analysis based on several papers and our startup. We will first define management analytics along the descriptive, predictive, comparative (i.e., comparing performance indicators under different interventions), and prescriptive analytics dimensions. We then briefly discuss an ML solution for the G/G/1 queue based on [1] and its extension to the G(t)/G/1 queue based on [2].
Our main focus will be structural causal queueing models (SCQM), based on [3]. In that work, we propose a data-driven representation of system building blocks to create a simulator without prior knowledge of the system. We show that this approach is effective for comparative analytics, which, for simplicity, we demonstrate by analyzing expected waits in an M/M/1 queue with speed-ups.
The SCQM approach first requires recovering the parent sets of the queueing variables from data, which we do using an off-the-shelf algorithm (even at moderate sample sizes). We then use machine learning to estimate the causal structure of the queue, e.g., Lindley's recursion, and apply G-computation to derive inference results for counterfactual interventions. We compare the performance of estimates obtained by a traditional closed-form queueing-theoretic (QT) analysis (which uses data-driven estimates of the queue's primitives) with SCQM-based estimators. We find that the errors of the SCQM, which assumes no knowledge of the system's dynamics and features, and those of the QT analysis (which requires this knowledge) are comparable.
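As a point of reference for the Lindley's recursion mentioned above, here is a minimal simulation sketch (our own illustration, not the paper's SCQM pipeline): it iterates the recursion W_{n+1} = max(0, W_n + S_n − A_{n+1}) for an M/M/1 queue and compares the simulated mean wait with the closed-form value.

```python
import random

def mm1_mean_wait(lam, mu, n_customers, seed=1):
    """Estimate the mean waiting time in queue for an M/M/1 system by
    iterating Lindley's recursion: W_{n+1} = max(0, W_n + S_n - A_{n+1})."""
    rng = random.Random(seed)
    w = 0.0       # waiting time of the current customer
    total = 0.0
    for _ in range(n_customers):
        total += w
        s = rng.expovariate(mu)   # service time of the current customer
        a = rng.expovariate(lam)  # interarrival time to the next customer
        w = max(0.0, w + s - a)   # Lindley's recursion
    return total / n_customers

lam, mu = 0.8, 1.0                  # arrival and service rates (rho = 0.8)
sim = mm1_mean_wait(lam, mu, 500_000)
exact = lam / (mu * (mu - lam))     # closed-form M/M/1 mean wait in queue
```

With rho = 0.8 the exact mean wait is 4.0, and the simulated estimate should land close to it; the comparative-analytics question in the abstract is how well such quantities can be recovered without knowing this structure in advance.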
Our results suggest that SCQM would be effective in practical settings where even expert QT analysis cannot provide closed-form results. We will finish with a demo of SiMLQ; see WWW.SiMLQ.COM. The SiMLQ software uses machine learning to automate the visualization, simulation, and optimization of queueing processes. SiMLQ automatically constructs data-driven simulation models from event-log data collected by common information systems and enables users to improve process resource management, increase efficiency, reduce cost, and manage risks. SiMLQ: from data to action!
References
- Baron, O., Krass, D., Senderovich, A., Sherzer, E. Supervised ML for solving the GI/GI/1 queue. INFORMS Journal on Computing, 36(3), 766–786. DOI: 10.1287/ijoc.2022.0263. Read Article
- Sherzer, E., Baron, O., Krass, D., Reshef, H. (2025). Approximating G(t)/GI/1 queues with deep learning. European Journal of Operational Research, 322(3), 889–90. Read Article
- Baron, O., Krass, D., van der Laan, M., Senderovich, A., Xu, Z. Queueing Causal Models: Comparative Analytics in Service Systems. Forthcoming in M&SOM. Read Article
Speaker: Jose Blanchet, Professor, Stanford University
Title: TBD
Abstract: TBD
Speaker: Anton Braverman, Associate Professor, Northwestern University
Title: On the Accuracy of Diffusion Approximations for Queueing Models with General Primitives
Abstract: Despite their widespread use, the accuracy of diffusion approximations for queueing systems with general primitives remains poorly understood. This talk explores this gap. A unifying message emerges: diffusion error decomposes naturally into interior and boundary terms. Interior terms are strikingly universal—they can be controlled using only low-order moments of the system primitives. Boundary terms, by contrast, are fundamentally more delicate: while crude bounds are relatively easy to obtain, achieving sharp (order-optimal) bounds requires deeper, model-specific insight.
The analysis hinges on extending the generator approach of Stein’s method beyond continuous-time Markov chains to piecewise-deterministic Markov processes (PDMPs). In this setting, the infinitesimal generator alone no longer suffices to characterize the stationary distribution; instead, we work with the basic adjoint relationship (BAR), which provides the appropriate stationary balance equation for PDMPs.
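For readers less familiar with the generator approach, its standard schematic (stated here for a Markov process $Y$ with generator $G_Y$ and stationary distribution $\pi$; notation ours) pairs a Poisson equation with a comparison identity. The talk's PDMP setting replaces the stationarity identity with the basic adjoint relationship, which carries additional boundary terms:

```latex
% Poisson (Stein) equation for a test function h, and the resulting
% comparison identity for the pre-limit stationary quantity X:
G_Y f_h \;=\; \mathbb{E}_{\pi}[h(Y)] \;-\; h,
\qquad
\bigl|\mathbb{E}[h(X)] - \mathbb{E}_{\pi}[h(Y)]\bigr|
  \;=\; \bigl|\mathbb{E}[G_Y f_h(X)]\bigr|.
```

Bounding the right-hand side through properties of the solution $f_h$ is what produces the interior and boundary error terms described above.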
Speaker: Xinyun Chen, Associate Professor, The Chinese University of Hong Kong
Title: Learning When to Surge: Online Queue-based Pricing for Delay-sensitive Customers
Abstract: Motivated by applications in the sharing economy and online marketplaces, we introduce an online learning method to optimize surge pricing strategies for service systems. Specifically, we propose a non-parametric framework that does not rely on any assumptions about the functional form of a customer’s sensitivity to price and delay. Both our algorithmic design and theoretical analysis arise from a detailed examination of the underlying queueing structure. We prove that our online learning algorithm achieves an improved regret order over standard multi-armed bandit algorithms and showcase its efficiency through a set of numerical examples.
Speaker: Jing Dong, Associate Professor, Columbia University
Title: Service-Induced Congestion in Memory-Constrained LLM Serving
Abstract: In large language model (LLM) serving, each request accumulates persistent memory during service as its key–value cache grows with every generated token. Under high concurrency, aggregate memory usage therefore increases endogenously over time: service itself creates future capacity pressure. This progressive resource consumption breaks the monotonicity assumptions underlying classical stability analyses and gives rise to a new form of service-induced congestion.
In this work, we develop a discrete-time dynamical model of memory-constrained inference that captures the interplay among admission, memory growth, and throughput. In a saturated-input regime, the system admits both eviction-free fixed points and periodic limit cycles. For homogeneous job sizes, we show that the eviction-free equilibrium is unstable under standard continuous batching and that the dynamics converge to a unique asymptotically stable worst-case limit cycle, where throughput losses can be as large as 50%. For heterogeneous job sizes, we establish a sharp stability dichotomy: under a large-prompt scaling, the eviction-free equilibrium is asymptotically stable if and only if the decoding lengths are coprime.
Our results characterize precisely when workload heterogeneity stabilizes the system and when it fails to do so. Together, these findings identify service-induced congestion as a structural instability mechanism in memory-constrained systems and provide conditions under which heterogeneity restores stability.
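The coprimality condition in the dichotomy above can be checked directly. Below is a toy sketch (function name and interface are ours, and we interpret "coprime" setwise, i.e., overall gcd equal to 1; the paper's dynamical model is not reproduced here):

```python
from math import gcd
from functools import reduce

def decoding_lengths_coprime(lengths):
    """Toy check of the stated stability condition under large-prompt
    scaling: the eviction-free equilibrium is asymptotically stable
    iff the decoding lengths are (setwise) coprime."""
    return reduce(gcd, lengths) == 1

stable = decoding_lengths_coprime([3, 5, 8])     # gcd 1
unstable = decoding_lengths_coprime([4, 6, 10])  # gcd 2
```

Per the dichotomy, the first workload would stabilize the eviction-free equilibrium while the second would not.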
Speaker: Peter Glynn, Professor, Stanford University
Title: Modern Computational Methods for Markov Chains
Abstract: In many application settings, computational methods are playing an increasingly important role in analysis and subsequent optimization. In this talk, we will discuss two recent directions in which progress has been made, both involving continuous state space Markov chains. In the first, we show how the existence of smooth transition densities can be used algorithmically to significantly speed up Markov chain computations relative to conventional discretization methods. The second research direction concerns the use of deep learning as a vehicle for computing various Markov chain performance measures. In this approach, simulations are combined with a neural network loss function informed by the integral equation to be solved. This talk covers joint work with Jose Blanchet and Yanlin Qu.
Speaker: Xin Guo, Professor, UC Berkeley
Title: An Alpha-potential Game Framework for Stochastic Games
Abstract: Game theory has a long and distinguished history, with min–max (zero-sum) games extensively studied since the foundational work of John von Neumann and John Nash. The transition from zero-sum to general-sum games represents a fundamental escalation in both conceptual and computational complexity. Over the past decade, mean field game theory has emerged as a powerful framework, providing deep theoretical insights and computational tools for analyzing large-scale, non-zero-sum stochastic games. However, classical mean field games rely on player homogeneity and focus primarily on limiting behavior as the number of players tends to infinity.
In this talk, we introduce a new paradigm for dynamic N-player non-cooperative games: alpha-potential games. In this framework, the change in a player’s value function resulting from a unilateral deviation coincides with the change in an associated alpha-potential function, up to an error term α. This structure reduces the challenging problem of finding α-Nash equilibria in dynamic games to the minimization of the corresponding alpha-potential function. We further show that this minimization problem can be formulated as a conditional McKean–Vlasov control problem.
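In symbols, the defining property described above can be sketched as follows (notation ours: $V_i$ denotes player $i$'s value function, $\Phi$ the alpha-potential, and $s_i \to s_i'$ a unilateral deviation):

```latex
\Bigl|\, \bigl[ V_i(s_i', s_{-i}) - V_i(s_i, s_{-i}) \bigr]
     - \bigl[ \Phi(s_i', s_{-i}) - \Phi(s_i, s_{-i}) \bigr] \,\Bigr| \;\le\; \alpha
\quad \text{for all players } i \text{ and all deviations } s_i \to s_i'.
```

Under a cost convention for the value functions, any minimizer of $\Phi$ over strategy profiles is then an $\alpha$-Nash equilibrium, which is the reduction from equilibrium computation to minimization mentioned above.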
Within this framework, the parameter alpha captures essential structural features of the game, including admissible strategy classes, interaction intensity, and the degree of heterogeneity among players. Using a stochastic network game as an illustrative example, we present recent theoretical developments and highlight several open problems arising from this new game framework.
Speaker: Bruce Hajek, Endowed Chair, University of Illinois at Urbana-Champaign
Title: TBD
Abstract: TBD
Speaker: Eva Locherbach
Title: Mean Field Limits for Systems of Interacting Spiking Neurons
Abstract: I will give an overview of recent results on mean field limits for systems of interacting point processes modeling spiking (biological) neurons. The talk will start with a short introduction to the functioning of neurons and the modeling of their spiking activity by stochastic integrate-and-fire models, and then focus on a particular class of models based on stochastic intensities. We will then discuss mean field limits and propagation-of-chaos results for large homogeneous systems of neurons and see how the limit is described by a McKean-Vlasov type equation driven by a Poisson random measure. A second part of the talk is devoted to the study of systems with random synaptic weights under different scalings. We will see how this setting leads to conditional propagation of chaos and to the emergence of a source of common noise in the limit. Finally, I will also discuss the long-time behavior of both the finite and the limit system of neurons. In particular, we will see how metastable behavior can arise in some situations.
The second part of the talk is based on joint work with Xavier Erny, Dasha Loukianova and Elisa Marini.
Speaker: Lewis Mitchell, Professor, Adelaide University
Title: Measuring, Mapping, and Modelling Social Information Flows
Abstract: Complex contagion of information is a ubiquitous feature of online social networks and has been modelled using a variety of approaches. However, these models typically treat "information" as a binary state adopted by an individual node in such a network, and so do not capture any of the complex features of actual information transfer between individuals embedded in socio-technical systems. Here we discuss some of the features of actual information transfer in real online social networks, and how non-parametric entropy estimators can be used to effectively map social information flows in the online information environment. We also discuss a "quoter model" that attempts to capture some of the properties of actual online written behaviour. This quoter model exhibits a number of the canonical features of complex contagion in social networks, as well as some more realistic results that traditional models do not capture, such as that network density inhibits social information flow. We present a number of case studies on social information flow using the quoter model, and explore how phenomena like echo chambers might arise as a consequence of social information flow.
Speaker: Eliza O'Reilly, Assistant Professor, Johns Hopkins University
Title: The Stochastic Geometry of Decision Tree Learning
Abstract: Random forests are a widely used class of prediction algorithms made of ensembles of randomized decision trees. The most commonly used algorithms use only one covariate of the input data to partition the data in a given node of a tree. Oblique random forests are variants where splits are allowed to depend on linear combinations of the covariates. In this talk, we will discuss a class of efficiently generated random tree and forest estimators called oblique Mondrian trees and forests, as the trees are generated by first selecting a set of features from linear combinations of the covariates and then running the stochastic Mondrian process that hierarchically partitions the data along these features. Our theoretical analysis illuminates when these estimators can adapt to dimension reduction models for which the output depends on a general low-dimensional relevant feature subspace and we quantify how robust the risk is with respect to error in the estimation of these relevant features. We will then discuss an approach for identifying the relevant feature subspace that uses a Mondrian forest to estimate the expected gradient outer product (EGOP). In addition, we introduce an iterative algorithm called Transformed Iterative Mondrian (TrIM) forest to improve the Mondrian forest estimator by incorporating data-adaptivity in the partitioning process via the EGOP.
Speaker: Sarath Yasodharan, Assistant Professor, IIT Bombay
Title: A Sanov-type Theorem for Marked Sparse Random Graphs and its Applications
Abstract: We prove a Sanov-type large deviation principle for the component empirical measure of certain families of sparse random graphs whose vertices are marked with i.i.d. random variables. Specifically, we show that the rate function can be expressed in a fairly tractable form involving suitable relative entropies.
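For context, the classical Sanov theorem for the empirical measure $L_n = \frac{1}{n}\sum_{i=1}^n \delta_{X_i}$ of i.i.d. samples $X_1,\dots,X_n \sim \mu$ takes the following form; the result above plays the analogous role for the component empirical measure of marked sparse graphs (this classical statement is for orientation only):

```latex
\mathbb{P}(L_n \in A) \;\approx\; \exp\Bigl(-\,n \inf_{\nu \in A} H(\nu \,\|\, \mu)\Bigr),
\qquad
H(\nu \,\|\, \mu) \;=\; \int \log\frac{d\nu}{d\mu}\, d\nu,
```

where $H(\cdot \,\|\, \cdot)$ is the relative entropy.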
We illustrate two applications of this result:
- We quantify probabilities of rare events in stochastic networks on sparse random graphs.
- We characterize the annealed free energy density of a broad class of probabilistic graphical models.
Joint work with I-Hsun Chen and Kavita Ramanan.
Speaker: Yuan Zhong, Associate Professor, University of Chicago Booth
Title: Multilevel Picard Schemes for Solving High-Dimensional Drift Control Problems with State Constraints
Abstract: Motivated by applications to queueing network controls, we develop a simulation scheme, the so-called multilevel Picard (MLP) approximation, for solving high-dimensional drift control problems whose states are constrained to stay within the nonnegative orthant over a finite time horizon. We prove that, under suitable conditions, the MLP approximation overcomes the curse of dimensionality in the following sense: to approximate the value function and its gradient at a given time and state to within ε mean-squared error, the computational complexity grows at most polynomially in the dimension d and in 1/ε. To illustrate the effectiveness of the scheme, we carry out numerical experiments for a class of test problems related to the dynamic scheduling of parallel server systems in heavy traffic, and demonstrate that the scheme is computationally feasible up to dimension 20.
Speaker: Bert Zwart, Professor, Eindhoven University of Technology
Title: Data-driven Decision-making: The Impact of Rare Events and Heavy Tails
Abstract: Designing safety-critical systems—such as financial portfolios, self-driving cars, and power grids—requires minimizing costs (or maximizing returns) while ensuring that the probability of systemic failure remains extremely small, often on the order of 10⁻³ or less. These problems are naturally formulated as chance-constrained optimization, where an objective is optimized subject to probabilistic constraints on undesirable events.
Despite its conceptual appeal, solving chance-constrained optimization problems becomes computationally challenging when the allowed violation probability is very small. Moreover, existing approaches offer limited insight into qualitative questions, such as how system costs scale with increasing reliability. In practice, the scarcity of failure data further complicates the estimation of rare-event probabilities, and the resulting model uncertainty is often neglected.
This presentation explores the potential of the complementary strengths of large-deviations theory and distributionally robust optimization to resolve this challenge and is based on the preprints https://arxiv.org/abs/2407.11825 and https://arxiv.org/abs/2503.21421 and ongoing efforts to unify both frameworks.
A particular source of motivation is to identify more cost-effective mechanisms for procuring ancillary services in power grids with high renewable penetration. In my talk, I will also delve into this application area, covering parts of https://arxiv.org/abs/2411.11093, and [while writing this abstract] hope to reflect on lessons learned from the past, in particular on the roles of physicists, statisticians, and economists in the design of Dutch dike heights.
This talk is based on joint work with Jose Blanchet (Stanford), Jobke Janssen (CWI), Joost Jorritsma (Oxford), Jalal Kazempour (DTU Copenhagen), Bart van Parys (CWI), Martijn Gosgens (UTwente), and Alessandro Zocca (VU University).
