Randomized benchmarking (RB) is a widely used strategy to assess the quality of available quantum gates in a computational context. The quality is usually expressed as an effective depolarizing error per step. RB involves applying random sequences of gates to an initial state and making a final measurement to determine the probability of an error. Current implementations of RB estimate this probability by repeating each randomly chosen sequence many times. Here we investigate the advantages of fully randomized benchmarking, where each randomly chosen sequence is run only once.
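For concreteness, the "effective depolarizing error per step" above is conventionally tied to the standard RB decay model; in common notation (ours, not defined in this abstract), the average success probability after a length-$m$ random sequence is
$$ p(m) = A f^{m} + B, $$
with step error $\epsilon = (d-1)(1-f)/d$ for Hilbert-space dimension $d$, where $f$ is the depolarizing parameter and $A$, $B$ absorb state-preparation and measurement errors.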
We find that full randomization offers several advantages, including smaller confidence intervals on the inferred step error, maximum likelihood analysis without heuristics, straightforward optimization of the sequence lengths, and the ability to model and measure behaviors beyond those captured by the basic randomized benchmarking model usually assumed, such as gate-position-dependent errors or certain time-dependent errors. Furthermore, we provide concrete protocols that minimize the uncertainty of the estimated parameters given a bound on the total experiment time, and we implement a flexible maximum likelihood analysis.
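As an illustrative sketch (not the authors' implementation), the maximum likelihood analysis for fully randomized data can be set up as follows, assuming the standard decay model $p(m) = A f^m + B$ and treating each single-shot sequence outcome as a Bernoulli trial; all names and numerical values here are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical ground truth for the standard RB decay model p(m) = A*f**m + B.
A_true, B_true, f_true = 0.5, 0.5, 0.995

# Fully randomized design: each random sequence (of length m) is run only once,
# yielding a single binary outcome ("success" = no error detected).
lengths = rng.choice([1, 50, 200, 800], size=4000)
p_success = A_true * f_true ** lengths + B_true
outcomes = rng.random(len(lengths)) < p_success

def neg_log_likelihood(params):
    """Bernoulli negative log-likelihood of single-shot outcomes."""
    A, B, f = params
    p = np.clip(A * f ** lengths + B, 1e-9, 1 - 1e-9)
    return -np.sum(np.where(outcomes, np.log(p), np.log1p(-p)))

res = minimize(neg_log_likelihood, x0=[0.4, 0.5, 0.99], method="Nelder-Mead")
A_hat, B_hat, f_hat = res.x
step_error = (1 - f_hat) / 2  # single-qubit depolarizing error per step
```

Because every shot uses a fresh random sequence, the likelihood is an exact product of Bernoulli terms, which is what permits the heuristic-free analysis and sequence-length optimization described above.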
We consider several past experiments and determine the potential for improvements with optimized full randomization. We also experimentally observe such improvements in randomized benchmarking experiments in an ion trap at NIST. For an experiment with uniform lengths and repeated sequences the step error was $2.42^{+0.30}_{-0.22} \times 10^{-5}$, and for an optimized fully randomized experiment the step error was $2.57^{+0.07}_{-0.06} \times 10^{-5}$. We find a substantial decrease in the uncertainty of the step error as a result of optimized fully randomized benchmarking.