Async Patterns
xbbg v1 is async-first. Every data function has an async counterpart: bdp() → abdp(), bdh() → abdh(), bdib() → abdib(), and so on. The async versions are the canonical implementation — the sync functions are thin wrappers that delegate via _run_sync().
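The delegation pattern described above can be sketched with plain asyncio. This `_run_sync` is a hypothetical reconstruction for illustration, not xbbg's actual implementation, and `abdp`/`bdp` here are stand-ins:

```python
import asyncio

def _run_sync(coro):
    # If a loop is already running (e.g. inside async code), a plain
    # asyncio.run() would fail -- raise a helpful error instead.
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No running loop: drive the coroutine to completion here.
        return asyncio.run(coro)
    raise RuntimeError(
        "Called a sync wrapper inside a running event loop; "
        "use the async variant (e.g. await abdp(...)) instead."
    )

async def abdp(tickers, flds):
    # Stand-in async implementation: pretend to do Bloomberg I/O.
    await asyncio.sleep(0)
    names = [tickers] if isinstance(tickers, str) else tickers
    return {t: flds for t in names}

def bdp(tickers, flds):
    # The sync function is a thin wrapper that delegates to the async one.
    return _run_sync(abdp(tickers, flds))

print(bdp('AAPL US Equity', ['PX_LAST']))  # → {'AAPL US Equity': ['PX_LAST']}
```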
Architecture
Under the hood, xbbg runs a Rust async engine with pre-warmed worker pools. Each request worker holds an independent Bloomberg session, and concurrent calls are dispatched round-robin:
```
┌─────────────────────────────────────────────────┐
│                   xbbg Engine                   │
│                                                 │
│  ┌──────────────────────┐  ┌─────────────────┐  │
│  │ Request Worker Pool  │  │  Subscription   │  │
│  │ (request_pool_size)  │  │  Session Pool   │  │
│  │                      │  │ (sub_pool_size) │  │
│  │ Worker 1 ── session  │  │                 │  │
│  │ Worker 2 ── session  │  │   Session 1     │  │
│  │ ...                  │  │   ...           │  │
│  └──────────────────────┘  └─────────────────┘  │
│         │ round-robin           │ isolated      │
│         ▼                       ▼               │
│   bdp/bdh/bds/bdib        subscribe/stream      │
└─────────────────────────────────────────────────┘
```

With request_pool_size=2 (the default), two Bloomberg requests can run in parallel at the session level. Raising the pool size allows more simultaneous requests.
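The pool's queuing behaviour can be mimicked with plain asyncio; the semaphore below stands in for the engine's worker pool and is an analogy only, not xbbg internals:

```python
import asyncio

POOL_SIZE = 2  # mirrors the default request_pool_size

async def fake_request(name: str, pool: asyncio.Semaphore, order: list):
    async with pool:               # acquire a "worker"; extra callers wait here
        order.append(name)
        await asyncio.sleep(0.01)  # pretend Bloomberg I/O

async def main():
    pool = asyncio.Semaphore(POOL_SIZE)
    order = []
    # Three concurrent calls against a pool of two: the third request
    # starts only once one of the first two releases its slot.
    await asyncio.gather(
        fake_request('AAPL', pool, order),
        fake_request('MSFT', pool, order),
        fake_request('GOOGL', pool, order),
    )
    return order

order = asyncio.run(main())
print(order)  # third ticker starts last, after a slot frees up
```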
Sync/async function mapping
| Sync | Async |
|---|---|
| bdp() | abdp() |
| bdh() | abdh() |
| bds() | abds() |
| bdib() | abdib() |
| bdtick() | abdtick() |
| request() | arequest() |
All async functions accept identical parameters to their sync equivalents and return the same DataFrame types.
Basic usage in scripts
In a plain script or application (no existing event loop), use asyncio.run():
```python
import asyncio
from xbbg import blp

async def get_data():
    df = await blp.abdp(tickers='AAPL US Equity', flds=['PX_LAST', 'VOLUME'])
    return df

data = asyncio.run(get_data())
```

Concurrent requests
asyncio.gather() runs multiple Bloomberg requests in parallel on a single thread. Requests are dispatched across the worker pool simultaneously rather than sequentially:
```python
import asyncio
from xbbg import blp

async def get_multiple():
    results = await asyncio.gather(
        blp.abdp(tickers='AAPL US Equity', flds=['PX_LAST']),
        blp.abdp(tickers='MSFT US Equity', flds=['PX_LAST']),
        blp.abdh(tickers='GOOGL US Equity', flds=['PX_LAST'], start_date='2024-01-01'),
    )
    return results  # list of DataFrames, one per awaitable

results = asyncio.run(get_multiple())
```

The three requests above are dispatched at once. With the default request_pool_size=2, two run immediately and the third waits for a free worker. Increase the pool size to raise the concurrency ceiling:
```python
from xbbg import configure

configure(request_pool_size=4)
```

Jupyter notebooks
Jupyter and VS Code Interactive run an IPykernel event loop. For one-shot request/response APIs, the sync wrappers keep the familiar Bloomberg syntax working by using a notebook-only background event-loop bridge:
```python
from xbbg import blp

# Sync one-shot calls work in notebook cells
df = blp.bdp(tickers='AAPL US Equity', flds=['PX_LAST', 'VOLUME'])
hist = blp.bdh(tickers='AAPL US Equity', flds=['PX_LAST'], start_date='2024-01-01')
```

The bridge applies to bdp, bdh, bds, bdib, bdtick, and request. It preserves scoped xbbg.Engine(...) context across the background thread and propagates exceptions back to the cell.
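A minimal version of such a bridge can be built from the standard library. This is an illustrative sketch of the general technique (a dedicated background loop plus `run_coroutine_threadsafe`), not xbbg's actual bridge, and `fake_abdp` is a stand-in:

```python
import asyncio
import threading

class LoopBridge:
    """Run coroutines on a dedicated background event loop, waiting synchronously."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def run(self, coro):
        # Schedule the coroutine on the background loop and block the caller
        # (e.g. a notebook cell) until the result -- or exception -- comes back.
        future = asyncio.run_coroutine_threadsafe(coro, self._loop)
        return future.result()

async def fake_abdp(ticker):
    await asyncio.sleep(0)  # pretend Bloomberg I/O
    return {ticker: 'PX_LAST'}

bridge = LoopBridge()
print(bridge.run(fake_abdp('AAPL US Equity')))  # → {'AAPL US Equity': 'PX_LAST'}
```

Because the coroutine runs on a separate thread's loop, this works even when the calling thread already has its own running event loop, which is exactly the notebook situation.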
If you are already writing async notebook code, keep using top-level await:
```python
from xbbg import blp

df = await blp.abdp(tickers='AAPL US Equity', flds=['PX_LAST', 'VOLUME'])
```

GIL release
The Rust engine releases the Python GIL while waiting for Bloomberg I/O. This means Python threads can execute concurrently during a Bloomberg request — CPU-bound Python work is not blocked while requests are in flight. In practice:
- An asyncio.gather() over multiple Bloomberg calls does not serialize at the GIL boundary.
- Threads running independent Python work alongside Bloomberg requests are not stalled waiting for network responses.
This is a property of the Rust/PyO3 integration, not Python’s asyncio scheduler.
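The overlap this enables can be simulated with plain asyncio. In this sketch, asyncio.sleep stands in for a GIL-releasing Bloomberg request and asyncio.to_thread runs the CPU-bound work on a worker thread; it illustrates the concurrency pattern only, not the Rust engine itself:

```python
import asyncio
import time

def cpu_work():
    # CPU-bound Python work running in a worker thread
    return sum(i * i for i in range(200_000))

async def fake_bloomberg_io():
    # Stands in for an in-flight Bloomberg request that releases the GIL
    await asyncio.sleep(0.1)
    return 'df'

async def main():
    start = time.perf_counter()
    # Both run concurrently: the thread is not stalled behind the request.
    io_result, cpu_result = await asyncio.gather(
        fake_bloomberg_io(),
        asyncio.to_thread(cpu_work),
    )
    return io_result, cpu_result, time.perf_counter() - start

io_result, cpu_result, elapsed = asyncio.run(main())
print(io_result, cpu_result, round(elapsed, 2))
```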
Error handling
All xbbg exceptions inherit from BlpError, which lets you catch any Bloomberg-related failure with a single clause. The hierarchy:
| Exception | When raised |
|---|---|
| BlpSessionError | Session start, connect, or service open failure |
| BlpRequestError | Request-level error returned by Bloomberg |
| BlpSecurityError | Invalid or inaccessible security identifier |
| BlpFieldError | Invalid or inaccessible field name |
| BlpTimeoutError | Request timed out |
| BlpValidationError | Field validation failure (strict mode) |
| BlpInternalError | Internal engine error |
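Assuming the names in the table above, the hierarchy can be sketched as plain Python classes to show why a single `except BlpError` clause catches everything (an illustration, not xbbg's source):

```python
class BlpError(Exception):
    """Base class for all xbbg Bloomberg errors."""

class BlpSessionError(BlpError): ...
class BlpRequestError(BlpError): ...
class BlpSecurityError(BlpError): ...
class BlpFieldError(BlpError): ...
class BlpTimeoutError(BlpError): ...
class BlpValidationError(BlpError): ...
class BlpInternalError(BlpError): ...

def classify(exc: Exception) -> str:
    # One clause is enough: every subclass above is caught by BlpError.
    try:
        raise exc
    except BlpError as e:
        return type(e).__name__
    except Exception:
        return 'not a Bloomberg error'

print(classify(BlpSecurityError('bad ticker')))  # → BlpSecurityError
print(classify(ValueError('unrelated')))         # → not a Bloomberg error
```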
Single request
```python
import asyncio
from xbbg import blp
from xbbg.exceptions import BlpSecurityError, BlpError

async def safe_fetch(ticker: str):
    try:
        return await blp.abdp(tickers=ticker, flds=['PX_LAST'])
    except BlpSecurityError as e:
        print(f"Unknown security: {ticker} — {e}")
        return None
    except BlpError as e:
        print(f"Bloomberg error: {e}")
        raise

asyncio.run(safe_fetch('INVALID US Equity'))
```

Concurrent requests with partial failures
By default, asyncio.gather() cancels all pending awaitables and re-raises the first exception. Pass return_exceptions=True to collect results and errors together instead:
```python
import asyncio
from xbbg import blp
from xbbg.exceptions import BlpError

async def fetch_all(tickers: list[str]):
    results = await asyncio.gather(
        *[blp.abdp(tickers=t, flds=['PX_LAST']) for t in tickers],
        return_exceptions=True,
    )
    for ticker, result in zip(tickers, results):
        if isinstance(result, BlpError):
            print(f"Failed: {ticker} — {result}")
        else:
            print(f"OK: {ticker}, rows={len(result)}")

asyncio.run(fetch_all(['AAPL US Equity', 'INVALID', 'MSFT US Equity']))
```

Best practices
Batch tickers in a single call instead of many parallel calls.
abdp() and abdh() accept a list of tickers. A single call with ['AAPL US Equity', 'MSFT US Equity', 'GOOGL US Equity'] is more efficient than three parallel calls — it sends one Bloomberg request and Bloomberg returns results for all tickers together.
```python
# Preferred: one request, many tickers
df = await blp.abdp(
    tickers=['AAPL US Equity', 'MSFT US Equity', 'GOOGL US Equity'],
    flds=['PX_LAST', 'VOLUME'],
)

# Use gather when requests are genuinely different (different fields, dates, services)
results = await asyncio.gather(
    blp.abdp(tickers=['AAPL US Equity', 'MSFT US Equity'], flds=['PX_LAST']),
    blp.abdh(tickers='AAPL US Equity', flds=['PX_LAST'], start_date='2024-01-01'),
)
```

Use async when integrating with async frameworks.
If you are building a FastAPI service, ASGI app, async task queue, or other async application, prefer the a-prefixed functions — they participate in the event loop without blocking it. Calling the sync wrappers from these generic async contexts raises an error that directs you to await abdp(...), await abdh(...), and the other async APIs.
Sync functions work in scripts and notebooks.
If your script is sequential and you have no async requirements, the sync functions (bdp, bdh, bdib, etc.) are fully supported. In IPykernel notebooks and VS Code Interactive, one-shot sync calls use a background loop bridge; outside notebooks, sync wrappers use asyncio.run().
Match pool size to workload.
The default request_pool_size=2 is appropriate for most workloads. If you are issuing many concurrent gather() calls with large fan-outs, raise it — but Bloomberg API servers also have per-connection limits, so test before committing to a large value.
For the full API reference, see Bloomberg Data API.