Takes no time, produces several GB? No, it's quite possible they are shooting in a RAW format or capturing uncompressed on an external recorder.
But you do need to know from them exactly what format the sources are in and how they will be delivered; what storage space you'll need to hold those sources; what speed your drives need to be to cope with uncompressed; and whether it's practical to do an uncompressed edit, or better to edit offline at a lower resolution and then conform to uncompressed. So you need clarity on what they mean by an "uncompressed project".
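For a rough sense of why the storage and drive-speed questions matter, here's a back-of-the-envelope sketch in C. The frame size, bit depth, and frame rate are illustrative assumptions; swap in the client's actual spec:

    #include <stdio.h>

    int main(void) {
        /* Illustrative figures: 1080p25, 8-bit 4:2:2, i.e. 2 bytes per pixel. */
        double bytes_per_frame = 1920.0 * 1080.0 * 2.0;
        double fps = 25.0;
        double mb_per_sec  = bytes_per_frame * fps / 1e6;
        double gb_per_hour = bytes_per_frame * fps * 3600.0 / 1e9;
        printf("~%.0f MB/s sustained, ~%.0f GB per hour of footage\n",
               mb_per_sec, gb_per_hour);
        return 0;
    }

That works out to roughly 100 MB/s sustained and several hundred GB per hour, before you even consider 10-bit or higher resolutions.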
Is it a complete uncompressed workflow, or just an uncompressed export at the end? It's a development from yesterday's nonsense about exporting in ProRes.
Now they want the film as uncompressed QT so they can convert it themselves. So it sounds like you've got an Apple fanboy requesting final delivery. It's considered a lossless "uncompressed" codec, which is kind of an oxymoron. As to the file size, not sure how long your final vid is, but it's more than likely going to be bigger than the 4 GB mark. I also sometimes supply Blackmagic 8- or 10-bit uncompressed codec files as uncompressed sources if that's what the client wants. And as you're coming from H.264, converting up to uncompressed won't restore any quality anyway.
Granted, it would be even worse to try and stick to H.264. I've often had clients request "uncompressed", which we later found out meant H.264. Their use of the word "uncompressed" meant "don't resize it"; they had no idea about actual compression and codecs and all that.

See our detailed description of our benchmarking method and run the benchmark yourself: "Video codec benchmarking done right". Unlike visually lossless codecs, MagicYUV is mathematically lossless, meaning that the decompressed output is bit-by-bit identical to the original input.
This means there is no generation loss: you can re-compress the same clip over and over, as many times as you want, without quality degradation. MagicYUV offers a variety of compressed formats to choose from, with built-in high-quality and fast conversion for ease of use.
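That claim is easy to test yourself: encode a clip, decode it, and compare every decoded frame byte-for-byte against the source. A minimal comparison helper in C (the encode/decode round trip itself, done with whatever tool you use, is not shown):

    #include <stddef.h>
    #include <string.h>

    /* "Mathematically lossless" means this check passes for every frame:
     * the decoded output is bit-for-bit identical to what went in. A
     * visually lossless codec would fail it even if no difference is
     * visible on screen. */
    int frame_is_bit_identical(const unsigned char *original,
                               const unsigned char *decoded,
                               size_t frame_bytes)
    {
        return memcmp(original, decoded, frame_bytes) == 0;
    }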
Latest release: MagicYUV 2. New releases are made available automatically and can be downloaded through the link you received with your original purchase. We offer MagicYUV at a low price to make it accessible to everyone, but if it has truly revolutionized the way you work, show your appreciation by entering any amount above the default that your heart desires.
We very much appreciate it and will never forget! Users report: "Enables realtime 4K editing on an average PC." "On the strength of a little testing, this codec really is magic!" "This codec gives me goddamn awesome fps rates when recording games with Afterburner. This thing is a beast." MagicYUV Lossless Video Codec: a high-performance, ultra-fast, mathematically lossless video codec for recording, archiving, post-production, and editing at high resolutions.
There are '2vuy' files in the field with a version less than 2. If you want to support such files, see '2vuy' Backwards Compatibility. Similarly, there are 'yuv2' files in the field with a version less than 2; to support those, see 'yuv2' Backwards Compatibility. Despite the name, the 'yuv2' pixel format is not a simple byte-reversal of '2vuy'.
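To make the "not a simple byte-reversal" point concrete, here is a sketch of the two layouts as they are commonly documented: besides the different byte order, '2vuy' stores chroma as unsigned values biased by +128, while 'yuv2' stores chroma as two's-complement signed values. The struct and function names are illustrative:

    #include <stdint.h>

    /* '2vuy': each 2-pixel group is Cb, Y'0, Cr, Y'1, with chroma held
     * as unsigned, bias-128 values. */
    typedef struct { uint8_t cb, y0, cr, y1; } TwoVuyPair;

    /* 'yuv2': each 2-pixel group is Y'0, Cb, Y'1, Cr, with chroma held
     * as two's-complement signed values, so conversion needs a re-bias,
     * not just a byte swap. */
    typedef struct { uint8_t y0; int8_t cb; uint8_t y1; int8_t cr; } Yuv2Pair;

    static Yuv2Pair two_vuy_to_yuv2(TwoVuyPair p)
    {
        Yuv2Pair q;
        q.y0 = p.y0;
        q.y1 = p.y1;
        q.cb = (int8_t)(p.cb - 128);  /* unsigned biased to signed */
        q.cr = (int8_t)(p.cr - 128);
        return q;
    }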
Each n-bit component is left-justified in a 16-bit little-endian word; the 16 − n least significant bits of the word are the zero bits described above. Here are the bits of the 16-bit little-endian word in decreasing address order: [bit-layout diagram not reproduced].
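A sketch of that packing in C (n would be 10 for 10-bit components; the function names are mine):

    #include <stdint.h>

    /* Left-justify an n-bit component in a 16-bit word: the component
     * occupies the n most significant bits; the 16 - n least significant
     * bits are the zero bits described above. */
    static uint16_t pack_left_justified(uint16_t component, int n)
    {
        return (uint16_t)(component << (16 - n));
    }

    /* Recover the n-bit component by shifting back down. */
    static uint16_t unpack_left_justified(uint16_t word, int n)
    {
        return (uint16_t)(word >> (16 - n));
    }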
The 2 X bits are zero bits, described above. Here are the four 32-bit words in increasing address order: [diagram not reproduced]. Here are the bits of the four 32-bit little-endian words in decreasing address order: [diagram not reproduced]. We have numbered each component with its spatial order. The X bits are zero bits, described above. Each line of video data in an image buffer is padded out to the nearest 48-pixel (128-byte) boundary, so a line whose width is not a multiple of 48 pixels occupies more bytes in the file than its unpadded pixel count would suggest.
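Because the padding rule depends only on the width, the stored row size can be computed directly. A sketch (the function name is mine):

    #include <stddef.h>

    /* 'v210' stores 6 pixels of 10-bit 4:2:2 in four 32-bit words
     * (16 bytes); rows are padded to a 48-pixel (128-byte) boundary. */
    static size_t v210_row_bytes(size_t width_pixels)
    {
        size_t groups = (width_pixels + 47) / 48;  /* round up to 48 px */
        return groups * 128;
    }

For example, a 720-pixel line occupies exactly 1920 bytes (720 is a multiple of 48), while a 721-pixel line rounds up to 2048 bytes.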
When the width is a multiple of 48 pixels, each line in the file occupies exactly width × 8/3 bytes, with no padding needed. The 'colr' ImageDescription extension, described next, is always required when using the compression types in this document. It allows the reader to correctly display the video data and convert it to other formats.
The goal of the 'colr' extension is to let you map the numerical values of pixels in the file to a common representation of color in which images can be correctly compared, combined, and displayed.
The 'colr' extension is designed to work for multiple imaging applications such as video and print. Each application, driven by its own set of historical and economic realities, has its own set of parameters needed to map from pixel values to CIE XYZ. Therefore, the 'colr' extension has this format: [layout table not reproduced].
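Two colorParamType values are defined, 'prof' and 'nclc', both discussed below. For the 'nclc' case the payload boils down to three 16-bit big-endian indices; here is a sketch of a reader (the struct and field names are mine):

    #include <stdint.h>

    /* The 'nclc' payload: three 16-bit big-endian indices selecting
     * entries from the primaries, transferFunction, and matrix tables
     * discussed below. */
    typedef struct {
        uint16_t primaries;         /* e.g. 1 = ITU-R BT.709 */
        uint16_t transfer_function;
        uint16_t matrix;
    } NclcColor;

    static NclcColor read_nclc(const uint8_t *p)
    {
        NclcColor c;
        /* QuickTime stores these big-endian; assemble bytes explicitly. */
        c.primaries         = (uint16_t)((p[0] << 8) | p[1]);
        c.transfer_function = (uint16_t)((p[2] << 8) | p[3]);
        c.matrix            = (uint16_t)((p[4] << 8) | p[5]);
        return c;
    }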
A 'colr' extension of type 'prof' carries an ICC profile, the color model used by Apple's ColorSync; the contents of this type are not defined in this document. Contact Apple for more information on the 'prof' type 'colr' extension. A 'colr' extension of type 'nclc' is always required when using the compression types in this document. R, G, and B are tristimulus values; for our purposes, they are normalized to the range [0, 1]. The transferFunction parameter of the 'colr' extension identifies the function f for your QuickTime image.
The matrix parameter of the 'colr' extension identifies the matrix for your QuickTime image. The primaries parameter gives the CIE xy chromaticity coordinates of the red primary, green primary, blue primary, and white point, with codes for standards such as ITU-R BT.709, EBU Tech. 3213, and SMPTE C. If you are told your image is coded with the original 1953 NTSC primaries, you almost definitely want the SMPTE C primaries (code 6 above) instead. Here are the values of transferFunction, the function f in the diagram above: [table not reproduced].
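As one concrete instance of what those tabulated functions look like, here is the ITU-R BT.709 transfer function, mapping a linear value W in [0, 1] to a nonlinear W′ (a sketch; which table code selects this curve is defined by the omitted table):

    #include <math.h>

    /* ITU-R BT.709 transfer function; identical for R, G, and B. */
    static double bt709_transfer(double w)
    {
        if (w < 0.018)
            return 4.5 * w;                    /* linear toe near black */
        return 1.099 * pow(w, 0.45) - 0.099;   /* power-law segment */
    }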
The transfer function is identical for red, green, and blue, and is shown as a mapping from a linear value W to a nonlinear value W′. The 'colr' extension supersedes the previously defined 'gama' ImageDescription extension, whose information is both incomplete and obsolete. Writers of QuickTime files should never write both into an ImageDescription, and readers of QuickTime files should ignore 'gama' if 'colr' is present.
This operation is currently not representable or supported by the 'colr' parameters. Here are the values for matrix: [table not reproduced]. If you are told your image is coded with those values, you almost definitely want the SMPTE 170M values (code 6 above) instead.
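To make the matrix parameter concrete, here is how the two most common matrices derive luma from nonlinear R′G′B′. The coefficients are the standard BT.709 and BT.601/SMPTE 170M values; which table code selects which matrix is defined by the omitted table:

    /* Luma under the ITU-R BT.709 matrix. */
    static double luma_bt709(double r, double g, double b)
    {
        return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    /* Luma under the ITU-R BT.601 / SMPTE 170M matrix. */
    static double luma_bt601(double r, double g, double b)
    {
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }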
Some digital video signals can carry a video index (see SMPTE RP 186) which explicitly labels the primaries, transferFunction, and matrix of the signal. If your video digitizing hardware can read video index, then use those values to fill in the 'colr' extension instead of the default values above. If your video output hardware can insert video index, consider doing so according to the 'colr' extension fields. The 'fiel' ImageDescription extension defines the temporal and spatial relationship of each of the height lines of the data in an image buffer. This information is required to correctly display and process the video data (e.g., to deinterlace or resize it).
Lines so numbered are called picture lines. On a display, picture lines are evenly spaced with no gaps. For interlaced images, a line in one field is vertically located exactly halfway between the next higher and lower lines in the other field.
As you move from the top picture line to the bottom picture line of an interlaced buffer or display, you alternate between fields on each line. None of the bits above directly imply a video signal field type of F1 or F2; the terms F1 and F2 refer to a characteristic of the electrical waveform of the video signal for a given field. For every setting of fields and detail, we will define S(n) and T(n), which map from address order to spatial and temporal order, respectively.
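As an illustration of what one such mapping might look like, here is a sketch for one hypothetical setting (an assumption for illustration, not one of this document's defined fields/detail combinations): a woven, interleaved two-field buffer in which even-addressed lines carry the temporally earlier field:

    /* S(n): in a woven buffer, address order already is spatial order. */
    static int spatial_order(int n) { return n; }

    /* T(n): all lines of the earlier field (even addresses) precede all
     * lines of the later field (odd addresses) in time. */
    static int temporal_order(int n, int height)
    {
        int lines_in_first_field = (height + 1) / 2;
        return (n % 2 == 0) ? n / 2 : lines_in_first_field + n / 2;
    }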
In order to determine whether a field in a QuickTime file is F1 or F2, you must know the type of video signal and the intended position of the QuickTime buffer in the video raster. Given a video signal type, you can derive the raster position from the 'clap' ImageDescription extension described below. None of the bits above correspond to the often-misused term "field dominance." Field dominance is either F1-dominant (edits fall between an F2 and a subsequent F1) or F2-dominant (edits fall between an F1 and a subsequent F2).
Once video footage is placed in a QuickTime file, its fields have been grouped into frames, and edits are only possible on one boundary or the other. So to determine the field dominance of video footage in a QuickTime file, determine the field type F1 or F2 of the temporally earlier field in each frame of the QuickTime file.
The temporally earlier field of each QuickTime frame is a dominant field, and the temporally later field of each QuickTime frame is a non-dominant field. Every QuickTime image has a "pixel aspect ratio," which is the ratio of the horizontal spacing of luma sampling instants to the vertical spacing of picture lines. It is specified by the 'pasp' ImageDescription extension. Applications must know the pixel aspect ratio in order to draw round circles, lines at a specified angle, etc.
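For example, given the hSpacing and vSpacing values carried by 'pasp', the width at which the image displays with square pixels is simple arithmetic (a sketch; the function name is mine):

    /* Display width, in square pixels, for an image whose 'pasp'
     * extension carries the given hSpacing:vSpacing ratio. */
    static double square_pixel_width(int coded_width,
                                     int h_spacing, int v_spacing)
    {
        return (double)coded_width * h_spacing / v_spacing;
    }

With 601-style 525-line video, square_pixel_width(720, 10, 11) gives about 654.5, which is why a 720-pixel line does not display as 720 square pixels.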
Every QuickTime image has a "clean aperture," which is a reference rectangle specified relative to the width pixels and height lines of the image by the required 'clap' ImageDescription extension. Using the pixel aspect ratio and clean aperture dimensions, we can derive the "picture aspect ratio," which is the horizontal to vertical distance ratio of the clean aperture on a display device.
The picture aspect ratio is typically 4:3 or 16:9. The clean aperture is used to relate locations in two QuickTime images. Given two QuickTime images with identical picture aspect ratios, you can assume that the top left corners of their clean apertures are coincident, and likewise the bottom right corners.
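That derivation is simple arithmetic (a sketch; the parameter names are mine):

    /* Picture aspect ratio: horizontal extent of the clean aperture on
     * the display divided by its vertical extent. */
    static double picture_aspect_ratio(double clap_width, double clap_height,
                                       int h_spacing, int v_spacing)
    {
        return (clap_width * (double)h_spacing)
             / (clap_height * (double)v_spacing);
    }

For example, a 704 x 480 clean aperture with 10:11 pixels yields 704*10 / (480*11) = 4/3.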
The clean aperture also provides a deterministic mapping between a QuickTime image and the region of the video signal as seen on a display device from which it was captured or to which it will be played.
Each video interface standard (e.g., 525-line or 625-line) defines a clean aperture. You can think of a video interface standard's clean aperture as a fixed rectangular region on a display device of that standard. Given a QuickTime image and a QuickTime video component implementing a particular interface standard, the image's clean aperture is mapped onto the clean aperture of that standard. An example: say a user imports two clips with the same picture aspect ratio into an NLE application and performs a transition, such as a cross-fade, between the two clips.
The application needs to know how to align the pixels of each source clip in the final result. Even if both clips have the same width and height, they don't necessarily represent the same region of the display device or, equivalently, the video signal.
This is because the clips may have been captured on different video digitizer devices which digitize different regions of the incoming video signal. The user notices that one or both of the clips appear "shifted" in the final output. Multiple generations of processing produce even more pronounced shifts. The 'clap' ImageDescription extension eliminates this problem. First, QuickTime video digitizers label captured clips with the location of the clean aperture from the original video signal.
Next, when the user imports these clips and performs a transition, the NLE application uses the clean aperture of each input clip to figure out how the pixels of each clip correspond. The NLE application produces an output clip with a clean aperture corresponding to the two input clips.
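A sketch of that correspondence step, assuming each clip's 'clap' information has been reduced to the position of its clean-aperture origin within its own buffer (the types and names here are illustrative, not fields of the actual extension):

    typedef struct { double x, y; } Point;

    /* The translation that maps clip B's pixels onto clip A's: align
     * the clean-aperture origins and the picture content lines up,
     * even if the buffers digitized different regions of the signal. */
    static Point clean_aperture_shift(Point clap_origin_a, Point clap_origin_b)
    {
        Point shift = { clap_origin_a.x - clap_origin_b.x,
                        clap_origin_a.y - clap_origin_b.y };
        return shift;
    }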
Finally, the NLE application plays back the result, and the QuickTime image decompressor or video output component aligns the clean aperture of the clip to that of the output video signal. The clean aperture provides the missing piece of information so that the user sees no shifts. Ideally, all applications and devices would digitize and output the same region of the video signal so that the clean aperture would be fixed relative to the width pixels and height lines of the image.
Unfortunately, this is far from the case in the industry today. The 'clap' ImageDescription extension provides the necessary information so that correct interchange of files is possible. In "The Production Level of Conformance," this document specifies a greatly constrained set of ImageDescription parameters which all new devices and applications should support at a minimum, so that interchange of files can be simple and inexpensive.
Even applications that produce synthetic imagery (e.g., titling and animation tools) should write these extensions. The application should allow the user to position objects relative to the (typically 4:3 or 16:9) clean aperture.
For example, this allows the output of multiple animation applications to be combined without manual alignment, and it allows an application to synthesize imagery using data originally derived from video signals. Note that the clean aperture is distinct from, for example, the region of the video raster that is free from artifacts such as black bars from DV cameras or garbage bars from compression chips. Items like that can be carried by other ImageDescription extensions, but they are outside the scope of this document. The size and location of the clean aperture is fixed for a given video standard and sampling frequency; it does not depend on the capturing equipment used or the image content of the captured signal.