

Current Replay Systems Not Up To the Task of Ensuring Accurate Calls

As Many as Twenty Percent of Technical Panel Jump Reviews May Be Wrong

by George S. Rossano

Range of Capabilities of Current Replay Systems, Compared to Actual Performance Needs

Property             Current Systems         Performance Needed
Frame Rate           30 Hz - 60 Hz           > 240 Hz
Integration Time     1/30 sec - 1/2000 sec   < 1/1000 sec
Scan Mode            Interlaced              Progressive
Spatial Resolution   STD - HD                4K
Multi-Camera         No                      Yes
Software Tools       No                      Yes
Auto Operation       No                      Desirable

Current capabilities span the least capable and most capable systems currently in use.

(17 October 2019)  Twenty percent is only an estimate.  It could be more, it might be less, but it is surely a significant number.  This is the sad fact of competition scoring.  Since IJS development began in earnest over 16 years ago in 2003, no one has ever conducted a rigorous, unbiased, quantitative study of the accuracy of Technical Panel calls.  Not once.  Not during development and not since.  To an engineer, IJS Technical Panel scoring is mostly a system of educated estimates, and not the proven, quantitatively validated measurement system it was meant to be.

For every level miscalled, a 0.2 to 2 point error is introduced into the scoring, depending on the element and the levels.  As another example, the difference between a clean quad Lutz, an under-rotation call, and a downgrade is 11.5 points vs. 8.8 points vs. 5.9 points in base value.  A single call in error by 2-3 points can potentially change results by several placements.  For this reason, calls must not be accurate merely some of the time, or even most of the time; they must be accurate all of the time.

Instant replay was implemented before IJS so that judges could review elements before they entered their scores.  It was extended to the Technical Panel when IJS was introduced.  Originally it used standard definition video; it currently uses high definition video.  In major competitions, both the judges and the Technical Panel have replay.  In smaller competitions, only the Technical Panel has replay. (1)

While all types of elements and errors are affected by limitations in the replay system, in the current discussion we will mainly consider jump under-rotation and downgrade calls, as jumps make up the largest number of elements in a program, and those call errors have the largest point consequences.  They also place the greatest stress on the replay system's capabilities.

Uncertainty in Jump Rotation Calls

Several factors determine the accuracy of under-rotation and downgrade calls.

Foremost, the panel must determine the takeoff direction of the jump, and the rotational orientation of the blade at the instant of landing with respect to the takeoff, with 90 degrees and 180 degrees short of full rotation the key demarcations for under-rotations and downgrades.  To do this accurately, they must also be able to identify the exact moments of takeoff and landing, as uncertainty in those times adds further uncertainty in the true directions at the actual takeoff and landing.

Further, since the takeoff and landing directions are not viewed simultaneously, there is a memory effect that also limits accuracy - instead of viewing the angle between two directions visible simultaneously, one is trying to determine the angle between two directions where the second direction is viewed nearly a second after the first.

Finally, the Technical Panel views apparent angles distorted by projection effects, and not true angles in 3D space.

It is proposed here that the design goal for the combined uncertainty in the takeoff direction and the blade landing orientation should be such that the maximum error in identifying landings at 90 degrees and 180 degrees short of fully rotated is limited to ±3 degrees; or in other words, to limit the region of ambiguity in determining if a jump is under-rotated or downgraded to 6 degrees each near 90 degrees and 180 degrees.

For the error rate for calls, we estimate that one half of the calls where the landing falls in the region of ambiguity near 90 and 180 degrees are likely to be wrong.  For a 6 degree region of ambiguity out of 90 degrees, that leads to an estimate that 3 percent of calls would be in error.  As we will discuss below, the range of ambiguity for current replay systems can be greater than 36 degrees out of 90, leading to an estimate that more than 18% of under-rotation and downgrade calls are in error.
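The arithmetic behind these estimates can be made explicit. The sketch below (plain Python; the function name and its defaults are our own framing, using the assumption above that half the calls inside the ambiguous region come out wrong) is illustrative only:

```python
def estimated_miscall_rate(ambiguity_deg, span_deg=90.0, wrong_fraction=0.5):
    """Rough estimate of the fraction of under-rotation/downgrade calls
    that are wrong, given the total width of the region of ambiguity
    around the 90- or 180-degree demarcation."""
    return wrong_fraction * ambiguity_deg / span_deg

# Design goal: a 6-degree region of ambiguity -> about 3% of calls in error.
goal_rate = estimated_miscall_rate(6.0)       # ~0.033
# Current systems: 36 degrees or more of ambiguity -> roughly 20% of
# calls in error, in line with the "greater than 18%" estimate above.
current_rate = estimated_miscall_rate(36.0)   # 0.2
```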

For double Axels through all the quads, skaters spend 0.5 to 0.8 seconds in the air.  They reach a peak rotation rate of about 6 rotations a second, and are still rotating at up to 2-3 rotations a second when they land.  As they land, their vertical speed can be up to 13 feet per second.  These values will guide our discussion of replay technology requirements.  As a worst case we will assume the Technical Panel is 180 feet from the farthest corner of the ice surface, where flips and Lutzes are frequently placed.  (At major competitions the Technical Panel is usually located several rows higher than the judges panel, set back several tens of feet from the boards.)

The Eye Is Useless

Putting aside the fact that the eye provides neither replay nor slow motion capability, the eye is useless for precisely discerning in real time the instants of takeoff and landing, the direction of takeoff, and the orientation of the blade at the instant of landing.

The two relevant characteristics of the eye, and any replay system, are the spatial resolution of the imaging system and the frame rate plus integration time of the system, which together determine the ability of the imaging system to discern spatial detail and temporal detail.

The eye has a typical angular resolution of about 2 arcmin.  This is determined by the sizes of the rods and cones in the eye (5-10 µm), the focal length of the eye (25 mm), and the diameter of the pupil (about 2 mm in a brightly lit arena), the latter determining the diffraction limited resolution of the eye.  The eye cannot resolve spatial detail smaller than about 1.1 inches at 180 ft.
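For readers who want to check figures like this one, the small-angle conversion from angular resolution to linear detail at a given distance can be sketched as follows (the function name is ours):

```python
import math

def linear_resolution_inches(angular_res_arcmin, distance_ft):
    """Smallest resolvable linear detail, in inches, for an imaging
    system of the given angular resolution at the given distance
    (small-angle approximation)."""
    angle_rad = math.radians(angular_res_arcmin / 60.0)
    return distance_ft * 12.0 * angle_rad

# The eye (~2 arcmin) at 180 ft resolves detail on the order of an inch.
eye_res = linear_resolution_inches(2.0, 180.0)
```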

The temporal resolution of the eye is determined by the eye's persistence of vision.  There are various estimates of this under different conditions.  A good typical number which we use here is 50 msec.  The motion of a moving object will be smeared over this 50 msec interval limiting the ability to discern rapid temporal details in the motion.

Now combine these two limitations of the eye with the characteristics of difficult jumps.

The eye cannot determine the instants of takeoff and landing to better than 50 msec each.  During 50 msec the skater will rotate up to nearly 60 degrees at the landing, ten times worse than the capability needed, and 2/3 of the 90 degree constraint for an under-rotation call.

Bottom line, for the eye in real time: when trying to identify unders and downs, the true blade orientation at the instant of landing may have been up to 30 degrees greater or smaller than the identified orientation.  This inevitably leads to a significant fraction of unaided eye calls being wrong.  It also explains in part why different judges on a panel come to different conclusions for how much a landing is not fully rotated.

HD Video Is Close To Useless

For the current video systems, the main advantage over the eye is that you get replay and slow motion, while the eye is real-time only.

Current replay systems use commercial HD cameras typically recording at 30 frames per second, giving a 33 msec frame time.  This is only a modest improvement over the eye at 50 msec.  During 33 msec the skater will rotate as much as 36 degrees at the landing in one frame time, six times worse than the capability desired, and 40% of the 90 degree constraint for an under-rotation call, making a 1/4-under call a 90 ± 18 degree measurement. (2)

For spatial resolution, several factors are at work.  There is the resolution of the video camera lens, the size of the detector pixels, and the resolution of the monitor used to display the video.  For the types of cameras used in current replay systems, the lens plus detector resolution is typically about 0.4 arcmin - which is 2.5 times better than the eye.  At 180 feet the spatial resolution is then 0.5 inches (about 1/2 the height of a blade).

Spatial resolution is also limited by the resolution of the display monitor.  The monitors typically used have effective operating resolutions of about three scan lines, or 0.3 - 0.4 inches at 180 feet, when the camera is zoomed in and the skater mostly fills the frame (not always the case - and occasionally not perfectly in focus either).  If not fully zoomed in, the resolution can be 2-3 times worse, in that case making the actual spatial resolution comparable to the eye's, and no better. (3)

An additional issue for the current replay system is the different video formats the videographer might provide.  The above assumes 1080i video, but if 720i is provided the spatial resolution is 50 percent worse.

The video feed provided is often interlaced video.  This means each frame at 30 frames per second consists of even and odd fields.  During the 1/60 second between fields the skater is moving, as much as 4 inches.  When displayed on a progressive scan monitor (generally the standard) the combined fields show distinct jagged shifts in the image for every even and odd scan line pair due to the skater motion between fields.  This makes it impossible to determine the horizontal spatial position of anything to better than about 4 inches.

The time delay between even and odd fields is also a problem for fast moving hands and feet during spins, where a skater could be rotating as many as 8-10 rotations a second.  When counting the number of rotations in position, there can be up to a 1/4 rotation error in deciding when a position is achieved and exited.

At 30 frames per second, the uncertainty in the time at which the blade contacts the ice at the landing of a jump is up to 17 msec due to the frame rate. Due to the spatial resolution, there is also a 0.5 inch uncertainty in distance from the ice, which corresponds to another 3 msec of uncertainty, for a total of 20 msec.  During this time the rotational motion of the skater can be up to 22 degrees.  Add that to the 36 degrees due to the frame time smearing noted above, and a total error of 58 degrees is possible, as bad as the unaided eye. (4)
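The error budget in the preceding paragraphs can be collected into one back-of-the-envelope calculation. This sketch uses the article's numbers (3 rotations per second at landing, 13 ft/sec descent); the function and its defaults are our own framing, and it assumes the camera integrates over the whole frame time:

```python
def landing_angle_uncertainty_deg(frame_rate_hz, spatial_res_in,
                                  spin_rate_rps=3.0, descent_fps=13.0):
    """Total rotational uncertainty (degrees) at a jump landing.

    Combines: image smear during one full frame time, a half-frame
    uncertainty in when the blade actually touches down, and extra
    timing uncertainty because the blade's height above the ice cannot
    be seen more finely than the spatial resolution allows."""
    frame_time = 1.0 / frame_rate_hz
    smear_deg = spin_rate_rps * 360.0 * frame_time        # blur in one frame
    touchdown_s = 0.5 * frame_time                        # half-frame timing
    height_s = (spatial_res_in / 12.0) / descent_fps      # height ambiguity
    timing_deg = spin_rate_rps * 360.0 * (touchdown_s + height_s)
    return smear_deg + timing_deg

# 30 fps with 0.5 in resolution -> about the 58 degrees quoted above.
total_deg = landing_angle_uncertainty_deg(30.0, 0.5)
```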

Bottom line: while the video replay system can often be perhaps two times more precise than the real-time eye, in many situations during a competition it is not significantly better than the real-time eye, and even at its best is inadequate by a factor of over six to ensure accurate calls.  For the most part, what the video system mainly provides is the ability to replay elements, not a clearer nor a significantly more detailed view of them.

In short, the current replay systems are under-engineered for the current scoring system, and have inadequate performance to determine the positions and orientations of the skaters to the accuracy needed.

Slow Motion Replay Helps Some

The current replay systems allow replay in slow and super slow motion.  Slow motion makes it easier to discern the moments of takeoff and landing in the jump, and follow the motion of the blade.  It does not, however, eliminate the image smearing during the frame time and due to interlace described above, and so does not eliminate the resulting uncertainty in blade position.  Nor does slow motion eliminate the uncertainty in the time of the takeoff and landing (which usually occur between two sequential frames) due to the finite frame rate.

Slow motion and super slow motion replay also make the image smearing and jaggies even more conspicuous and distracting than playback at normal speed.  The utility of slow motion replay would be enhanced by reducing image smearing and eliminating jaggies in the replay system.

Having Only One Camera is an Impediment

The current replay systems use only one camera.  This camera is usually located near the end of the judges panel, though at local competitions it may well be placed anywhere around the rink, at distances of more than 200 ft from the far corners.  It is not uncommon that the aspect angle from the camera to the skater makes it impossible to discern the feature of the skater's position the Technical Panel needs to see to make an accurate call.  For example, when the skater is moving transverse to the line of sight from the camera, changes of edge are nearly impossible to discern.

For determining rotations, the apparent angle between any two lines on a horizontal surface (e.g., takeoff direction and landing orientation) is altered by the amount of foreshortening caused by the camera's elevation. The only way to guarantee accurate measurement of a blade's rotation angle in all situations using a single camera is to place the camera on the rotation axis of the skater, i.e., overhead.
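The foreshortening effect can be illustrated with a little vector geometry. In this sketch (our own construction, not part of any replay system), two on-ice directions are projected onto the image plane of a camera whose line of sight is tilted down by a given elevation angle:

```python
import math

def apparent_angle_deg(az1_deg, az2_deg, elevation_deg):
    """Apparent (projected) angle between two directions on the ice, as
    seen by a camera looking down at elevation_deg below horizontal.
    Azimuths are measured on the ice from the camera's horizontal
    viewing direction."""
    e = math.radians(elevation_deg)
    d = (math.cos(e), 0.0, -math.sin(e))      # unit view direction

    def project(az_deg):
        # Horizontal on-ice direction, projected onto the image plane.
        a = math.radians(az_deg)
        u = (math.cos(a), math.sin(a), 0.0)
        dot = sum(ui * di for ui, di in zip(u, d))
        return tuple(ui - dot * di for ui, di in zip(u, d))

    p1, p2 = project(az1_deg), project(az2_deg)
    n1 = math.sqrt(sum(c * c for c in p1))
    n2 = math.sqrt(sum(c * c for c in p2))
    cosang = sum(a * b for a, b in zip(p1, p2)) / (n1 * n2)
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# A true 90-degree angle straddling the line of sight, viewed from only
# 10 degrees above the ice, projects to roughly 20 degrees...
skewed = apparent_angle_deg(45.0, 135.0, 10.0)
# ...while an overhead camera (90 degrees elevation) preserves it.
overhead = apparent_angle_deg(45.0, 135.0, 90.0)
```

The overhead placement is, of course, the on-axis case described above; any lower elevation distorts some orientations of the angle being judged.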

In situations where the aspect angle of the view is adverse (and in other situations), the practice is to "go with the skater" or "give the skater the benefit of the doubt."  But any undeserved points gifted to one skater in such situations are a detriment to all other skaters in the event.  Again, the fairest competition is one with 100% accurate calls for all skaters.  To avoid adverse effects due to aspect angle, additional cameras, judiciously placed, are needed.

Next Generation Replay

To ensure accurate calls, particularly for triples and quads, the current replay systems require a major upgrade in capability, and the accuracy of Technical Panels needs validation with quantitative testing.

What capabilities should this next generation system have?

Frame Rate: To meet our assumed measurement requirement of 6 degrees, a 180 frame per second camera would be adequate.  Taking into account other factors that influence the precision of blade rotation measurements, 240 frames per second would be preferable.  At 480 frames per second the region of ambiguity in deciding if a jump is under-rotated or downgraded would be reduced to 3 degrees each near 90 degrees and 180 degrees.  While rates up to 480 frames per second are appealing, the availability of cameras with that speed and also having the necessary spatial resolution (at a practical cost) is limited.
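The frame-rate figures follow directly from the landing rotation rate. A one-line check (the function is ours, using the 3 rotations per second landing rate from earlier):

```python
def required_frame_rate_hz(target_smear_deg, spin_rate_rps=3.0):
    """Frame rate at which a skater rotating at spin_rate_rps turns
    through no more than target_smear_deg in one frame time."""
    return spin_rate_rps * 360.0 / target_smear_deg

# A 6-degree target at 3 rotations/sec -> 180 frames per second.
fps_needed = required_frame_rate_hz(6.0)   # 180.0
```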

Shutter:  At a minimum there must be progressive scan readout (commonly implemented as a rolling shutter).  A global shutter is preferable, in which case all pixels are integrated and read out at the same time, eliminating distortions and artifacts in the images due to capturing different pixels at different times.

Integration time:  For most video cameras, the integration time for each pixel is the frame time.  A camera where the integration time can be set less than the frame time is preferable. For example, running a camera at 240 frames per second (4.2 msec frame time) but with an integration time of, say, 1 msec meets the rotational precision requirement while also reducing the significant motion blur of fast moving hands and feet by an additional factor of four.
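The blur reduction from a short exposure is simple to quantify. A sketch with example numbers (the 6 rotations per second figure is the peak jump rotation rate quoted earlier; the function is ours):

```python
def motion_blur_deg(integration_time_s, spin_rate_rps):
    """Rotational blur (degrees) accumulated during one exposure."""
    return spin_rate_rps * 360.0 * integration_time_s

# 240 fps with full-frame integration vs a 1 msec electronic shutter,
# for a skater at the peak rate of 6 rotations per second:
full_frame_blur = motion_blur_deg(1.0 / 240.0, 6.0)   # 9.0 degrees
short_blur = motion_blur_deg(0.001, 6.0)              # 2.16 degrees
```

The ratio of the two is the "additional factor of four" improvement noted above.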

Video Format:  A camera and monitor combination that delivered ideal 1080P resolution when viewing the monitor (something the current replay system does not do) would provide adequate spatial resolution (1/8 in. - roughly the width of a skate blade).  Even so, 4K resolution is preferable, because the ideal theoretical resolution of a single pixel of a 2K image is not the actual resolution you get viewing the monitor, due to the various factors that degrade resolution in an end-to-end system.

Multiple Cameras: A minimum of two camera views are needed to eliminate the limitations where the aspect angle of a single camera is unfavorable for an accurate call.  Multiple cameras also would allow additional measurement capabilities beyond just calling the elements.

Auto Focus, Auto Zoom, Auto Tracking:  These capabilities might take more than one generation to implement, but a completely operator-free system would yield huge cost savings in use.  Once a system is purchased, the camera operators are, in fact, the most expensive part of operating the system.  As the system design goal is to have multiple cameras, eliminating operators multiplies the cost savings.

Software Tools:  The current replay systems have no software measurement tools.  Development of measurement tools that fully exploit the increased capabilities of the hardware would significantly improve the accuracy of calls.  This is an enhancement that even the current system would benefit from.

Not long ago I was told by an ISU official who was involved with IJS from its inception that ISU past president Ottavio Cinquanta's vision from the beginning was that the Technical Panel was a temporary step until technology was developed to replace the panel with cameras and computers.  As far as I have been able to determine, however, the ISU has never had a program in place to develop the hardware and software tools to achieve that vision.

Completely replacing the Technical Panel with technology is not going to happen overnight, or maybe ever, considering the decision-making complexity of some level requirements for certain elements, as well as the non-quantifiable nature of others.  (How would software decide, for example, if something is difficult?)  Nevertheless, some basics are already within reach with the technology available today, with only a modest effort.

For example, determining under-rotations, downgrades, edge calls, identification of basic spin positions, and determining the number of rotations in a spin position using a replay system are all achievable with a modest development effort.  If these most common decisions were made by the replay hardware with little human discussion (and without the time-killing repetitive playing of the same clip over and over and over), the time spent in element reviews potentially could be cut in half.  The economic implications of this alone are worth the development effort.


(1)  Internationally, and domestically in the U.S., replay systems with several different design variations are in use.  These have varying capabilities.  Non-qualifying competitions in the U.S. rely on the competition videographer to provide the video feed to the replay hardware.  The details and quality of these feeds vary.  We assume in this article worst case examples for the video feed.

(2)  High end video cameras allow for integration times shorter than the frame time.  For systems with such cameras, the accuracy of identifying the directions of takeoff and landing can be several times better than described here, depending on the integration time, but still falls short of needed performance by factors of two to four.

(3)  If the camera operator maintains a constant image size for the skater when operating the camera zoom, the spatial resolution is independent of distance to the skater.  For example, if a constant image size of 10 ft vertical is maintained, then the spatial scale of a 1080 HD image is 0.11 in. per scan line (0.17 in. per 720i scan line).

(4)  For video feeds with short integration times (e.g., as short as 1 msec), motion smearing during the integration time becomes a less important source of error, and the time between images becomes the dominant source of error - still roughly four times worse than the design goal.