# PolicyFlux Documentation

**Computational framework for legislative process modeling**

PolicyFlux provides a structured Python environment for simulating voting dynamics, institutional behavior, and policy outcomes. It supports deterministic and stochastic execution modes with configurable influence layers and actor models.
```python
from policyflux import build_engine, IntegrationConfig, LayerConfig

config = IntegrationConfig(
    num_actors=50, policy_dim=2, iterations=100, seed=12345,
    layer_config=LayerConfig(
        include_ideal_point=True,
        include_public_opinion=True,
        public_support=0.60,
    ),
)

engine = build_engine(config)
engine.run()
print(f"Pass rate: {engine.pass_rate:.1%}")
```
## Overview

- **Composable configuration.** Construct experiments using `IntegrationConfig`, institutional presets, and flat parameter overrides. Modify scenario parameters without altering simulation internals.
- **Deterministic and stochastic engines.** Execute single deterministic runs or parallel Monte Carlo batches. Compare outcomes across institutional configurations using built-in summary metrics.
- **Layered influence model.** Represent public opinion, lobbying, media pressure, party discipline, and agenda control as independent, composable influence layers with configurable aggregation.
- **Reference documentation.** Complete coverage from introductory guides through auto-generated API reference. All public interfaces are documented with parameter descriptions and usage context.
## Navigation

### Documentation scope
- User guide: configuration, layers, engines, presets, scenarios
- Architecture: module map, runtime flow, design principles
- API reference: auto-generated from source via MkDocStrings
- Development: testing, quality assurance, release operations
- Institutional presets: presidential and parliamentary systems
- Execution engines: Monte Carlo and deterministic modes
## Standard workflow

1. Configure an `IntegrationConfig` — from presets, flat dicts, or direct construction
2. Build the engine with `build_engine(config)`
3. Execute `engine.run()`
4. Analyze `engine.pass_rate`, `accepted_bills`, and `rejected_bills`
## Choose your starting path
| If you want to... | Start here | Then continue with... |
|---|---|---|
| run your first simulation quickly | Getting Started | User Guide: Concepts |
| tune simulation parameters precisely | Configuration | Layers, Engines |
| compare institutional systems | Presets | Scenarios |
| understand internal design | Architecture | API Reference |
## What makes results reproducible

For comparative experiments, keep these inputs constant unless they are your intended independent variables:

- random seed (`seed`)
- number of actors (`num_actors`)
- dimensionality (`policy_dim`)
- layer aggregation strategy (`aggregation_strategy`)
- layer order (`layer_names`), when explicitly configured

When changing assumptions, modify one group of parameters at a time (for example, only `public_support`), then compare output deltas.
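The one-parameter-at-a-time discipline can be sketched with a small helper. Note that `one_at_a_time` is a hypothetical illustration, not part of the PolicyFlux API; the parameter names match those documented above.

```python
# Hypothetical helper (not a PolicyFlux API): yield parameter dicts that
# differ from a baseline in exactly one field, so each run isolates a single
# independent variable while seed, scale, and dimensionality stay fixed.
def one_at_a_time(baseline, param, values):
    for value in values:
        variant = dict(baseline)  # shallow copy; baseline stays untouched
        variant[param] = value
        yield variant

baseline = {"num_actors": 50, "policy_dim": 2, "iterations": 100,
            "seed": 12345, "public_support": 0.50}
variants = list(one_at_a_time(baseline, "public_support", [0.40, 0.60]))
```

Each resulting dict could then be fed through the flat parameter overrides mentioned in the overview, keeping every other input identical across runs.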
## Common first-week workflows

### 1) Baseline + one intervention

- Run a baseline configuration.
- Enable one layer (for example, public opinion).
- Re-run with an identical seed.
- Compare `pass_rate` and accepted/rejected counts.
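The comparison step above can be sketched in plain Python. The result dicts and their values here are illustrative placeholders; in practice each field would come from `engine.pass_rate` and the accepted/rejected counts after a run.

```python
# Sketch of the baseline-vs-intervention comparison, seed held fixed.
# The dict layout is an assumption for illustration, not a PolicyFlux type.
def compare_runs(baseline, intervention):
    """Report how enabling one layer shifted aggregate outcomes."""
    return {
        "pass_rate_delta": intervention["pass_rate"] - baseline["pass_rate"],
        "accepted_delta": intervention["accepted"] - baseline["accepted"],
        "rejected_delta": intervention["rejected"] - baseline["rejected"],
    }

baseline = {"pass_rate": 0.54, "accepted": 54, "rejected": 46}
with_opinion = {"pass_rate": 0.61, "accepted": 61, "rejected": 39}
diff = compare_runs(baseline, with_opinion)
```

Because the seed is identical across both runs, any nonzero delta is attributable to the enabled layer rather than to sampling noise.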
### 2) Institutional comparison

- Build two configs from system presets.
- Keep model scale and seed fixed.
- Run both engines.
- Compare aggregate outcomes.
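The final comparison step could tabulate results like this. The system names reflect the presidential and parliamentary presets mentioned in the scope list; the numeric values are placeholders standing in for each engine's `pass_rate` and bill counts.

```python
# Hypothetical summary formatter; inputs mirror engine.pass_rate and the
# accepted/rejected bill counts from two preset-built engines.
def summarize(system, pass_rate, accepted, rejected):
    total = accepted + rejected
    return f"{system}: {pass_rate:.1%} pass rate ({accepted}/{total} bills accepted)"

rows = [
    summarize("presidential", 0.54, 54, 46),
    summarize("parliamentary", 0.71, 71, 29),
]
```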
### 3) Robustness check

- Select one configuration.
- Repeat runs across a range of seeds.
- Inspect the variance of `pass_rate`.
- Use Monte Carlo backends for larger batches.
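The variance inspection in step 3 only needs the standard library. The `pass_rates` values below are illustrative; in a real check each entry would be `engine.pass_rate` from one seeded run.

```python
import statistics

# Placeholder pass rates collected from repeated runs over a seed range.
pass_rates = [0.62, 0.58, 0.65, 0.60, 0.61]

mean = statistics.mean(pass_rates)    # central tendency across seeds
spread = statistics.stdev(pass_rates) # seed-to-seed sampling variability
```

A small `spread` relative to the effect size you care about suggests the configuration is robust to seed choice; a large one argues for bigger Monte Carlo batches.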
## Where to find details
- Symbol-level API docs: API Reference
- Layer and runtime mechanics: Layers and Engines
- Contributor process and QA: Development
- Release and deployment workflow: Release, Docs Deployment