uniqc.task.result_types module#

Unified result types for all quantum backends.

This module defines a standardized result format that all platform adapters must convert their outputs to. This ensures consistent handling of results regardless of which quantum cloud platform was used.

The UnifiedResult dataclass provides:

  • Measurement counts and probabilities in a consistent format

  • Platform identification and task metadata

  • Optional advanced results (expectation values, statevector)

  • Raw platform result for debugging

Usage:

# Create from counts
result = UnifiedResult.from_counts(
    counts={"00": 512, "11": 488},
    platform="quafu",
    task_id="abc123"
)

# Create from probabilities
result = UnifiedResult.from_probabilities(
    probabilities={"00": 0.512, "11": 0.488},
    shots=1000,
    platform="originq",
    task_id="xyz789"
)
class uniqc.task.result_types.UnifiedResult(counts, probabilities, shots, platform, task_id, backend_name=None, execution_time=None, raw_result=None, error_message=None)[source]#

Bases: object

Unified quantum execution result format.

All platform adapters must normalize their output to this format, ensuring consistent result handling across different quantum backends.

Variables:
  • counts (Dict[str, int]) – Measurement counts as dict mapping bitstrings to counts. Example: {"00": 512, "11": 488}

  • probabilities (Dict[str, float]) – Measurement probabilities as dict mapping bitstrings to probs. Example: {"00": 0.512, "11": 0.488}

  • shots (int) – Total number of shots executed.

  • platform (str) – Platform identifier ('originq', 'quafu', 'ibm', 'dummy').

  • task_id (str) – Unique task identifier from the platform.

  • backend_name (str | None) – Name of the quantum backend/hardware used (optional).

  • execution_time (float | None) – Execution time in seconds (optional).

  • raw_result (Any) – Original platform result object for debugging (optional).

  • error_message (str | None) – Error message if execution failed (optional).

Example

>>> result = UnifiedResult.from_counts(
...     counts={"00": 512, "11": 488},
...     platform="quafu",
...     task_id="task-123"
... )
>>> print(result.probabilities)
{'00': 0.512, '11': 0.488}
backend_name: str | None = None#
counts: Dict[str, int]#
error_message: str | None = None#
execution_time: float | None = None#
classmethod from_counts(counts, platform, task_id, **kwargs)[source]#

Create UnifiedResult from measurement counts.

Probabilities are automatically computed from counts.

Parameters:
  • counts (Dict[str, int]) – Dict mapping bitstrings to measurement counts.

  • platform (str) – Platform identifier.

  • task_id (str) – Task identifier.

  • **kwargs (Any) – Additional attributes (backend_name, execution_time, etc.).

Returns:

UnifiedResult instance with computed probabilities.

Return type:

UnifiedResult

Example

>>> result = UnifiedResult.from_counts(
...     {"00": 512, "11": 488}, "quafu", "task-1"
... )
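The normalization that from_counts performs can be sketched as a stand-alone function; this is an illustrative sketch of the documented behavior, not the library's actual implementation:

```python
from typing import Dict

def probabilities_from_counts(counts: Dict[str, int]) -> Dict[str, float]:
    """Sketch: derive a probability distribution from raw counts."""
    total = sum(counts.values())
    if total == 0:
        # Guard against an empty run; real adapters may raise instead.
        return {bitstring: 0.0 for bitstring in counts}
    return {bitstring: n / total for bitstring, n in counts.items()}

probs = probabilities_from_counts({"00": 512, "11": 488})
# probs == {"00": 0.512, "11": 0.488}
```

This mirrors the probabilities shown in the class-level example above (512/1000 = 0.512).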
classmethod from_probabilities(probabilities, shots, platform, task_id, **kwargs)[source]#

Create UnifiedResult from probability distribution.

Counts are computed by multiplying each probability by the shot count.

Parameters:
  • probabilities (Dict[str, float]) – Dict mapping bitstrings to probabilities.

  • shots (int) – Number of shots used.

  • platform (str) – Platform identifier.

  • task_id (str) – Task identifier.

  • **kwargs (Any) – Additional attributes.

Returns:

UnifiedResult instance with computed counts.

Return type:

UnifiedResult

Example

>>> result = UnifiedResult.from_probabilities(
...     {"00": 0.5, "11": 0.5}, 1000, "originq", "task-2"
... )
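The inverse conversion can be sketched similarly. The rounding strategy below is an assumption for illustration; the library may resolve fractional counts differently:

```python
from typing import Dict

def counts_from_probabilities(probabilities: Dict[str, float],
                              shots: int) -> Dict[str, int]:
    """Sketch: derive integer counts by scaling probabilities by shots."""
    # round() is one plausible choice; truncation or largest-remainder
    # apportionment are alternatives the real implementation might use.
    return {bitstring: round(p * shots)
            for bitstring, p in probabilities.items()}

counts = counts_from_probabilities({"00": 0.5, "11": 0.5}, 1000)
# counts == {"00": 500, "11": 500}
```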
get_expectation(observable='Z')[source]#

Compute expectation value for a simple observable.

Currently supports only the single-qubit Z expectation value, computed from the first qubit’s measurement results.

Parameters:

observable (str) – Observable type (currently only ‘Z’ supported).

Returns:

Expectation value in range [-1, 1].

Return type:

float

Note

This is a simplified implementation. For complex observables, use the uniqc.analyzer module.
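The single-qubit Z expectation described above can be sketched as follows. The bit-ordering assumption (leftmost bitstring character = first qubit) is mine for illustration; the actual convention is platform-dependent:

```python
from typing import Dict

def z_expectation(probabilities: Dict[str, float]) -> float:
    """Sketch: <Z> on the first qubit from a probability distribution."""
    expval = 0.0
    for bitstring, p in probabilities.items():
        # Z eigenvalue: +1 for |0>, -1 for |1> on the measured qubit.
        expval += p if bitstring[0] == "0" else -p
    return expval

z_expectation({"00": 0.512, "11": 0.488})
# ≈ 0.512 - 0.488 = 0.024 (up to float rounding)
```

The result always lies in [-1, 1], matching the documented return range.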

platform: str#
probabilities: Dict[str, float]#
raw_result: Any = None#
shots: int#
task_id: str#