Merge branch 'master' into overlay-layout

pull/421/head
Giacomo Randazzo 7 years ago
commit b82e6ca60b
  1. .github/ISSUE_TEMPLATE/question.md (3 changes)
  2. CHANGELOG_V1.md (2 changes)
  3. README.md (5 changes)
  4. cameraview/build.gradle (2 changes)
  5. cameraview/src/androidTest/java/com/otaliastudios/cameraview/CameraOptions1Test.java (21 changes)
  6. cameraview/src/androidTest/java/com/otaliastudios/cameraview/CameraPreviewTest.java (22 changes)
  7. cameraview/src/androidTest/java/com/otaliastudios/cameraview/CameraViewCallbacksTest.java (2 changes)
  8. cameraview/src/androidTest/java/com/otaliastudios/cameraview/CameraViewTest.java (34 changes)
  9. cameraview/src/androidTest/java/com/otaliastudios/cameraview/IntegrationTest.java (2 changes)
  10. cameraview/src/androidTest/java/com/otaliastudios/cameraview/MockCameraController.java (4 changes)
  11. cameraview/src/androidTest/java/com/otaliastudios/cameraview/PictureResultTest.java (3 changes)
  12. cameraview/src/androidTest/java/com/otaliastudios/cameraview/TextureCameraPreviewTest.java (4 changes)
  13. cameraview/src/androidTest/java/com/otaliastudios/cameraview/VideoResultTest.java (4 changes)
  14. cameraview/src/main/gles/com/otaliastudios/cameraview/AudioMediaEncoder.java (279 changes)
  15. cameraview/src/main/gles/com/otaliastudios/cameraview/ByteBufferPool.java (15 changes)
  16. cameraview/src/main/gles/com/otaliastudios/cameraview/EglBaseSurface.java (1 change)
  17. cameraview/src/main/gles/com/otaliastudios/cameraview/EglCore.java (20 changes)
  18. cameraview/src/main/gles/com/otaliastudios/cameraview/InputBuffer.java (12 changes)
  19. cameraview/src/main/gles/com/otaliastudios/cameraview/InputBufferPool.java (15 changes)
  20. cameraview/src/main/gles/com/otaliastudios/cameraview/MediaCodecBuffers.java (50 changes)
  21. cameraview/src/main/gles/com/otaliastudios/cameraview/MediaEncoder.java (297 changes)
  22. cameraview/src/main/gles/com/otaliastudios/cameraview/MediaEncoderEngine.java (143 changes)
  23. cameraview/src/main/gles/com/otaliastudios/cameraview/OutputBuffer.java (11 changes)
  24. cameraview/src/main/gles/com/otaliastudios/cameraview/OutputBufferPool.java (18 changes)
  25. cameraview/src/main/gles/com/otaliastudios/cameraview/Pool.java (89 changes)
  26. cameraview/src/main/gles/com/otaliastudios/cameraview/TextureMediaEncoder.java (108 changes)
  27. cameraview/src/main/gles/com/otaliastudios/cameraview/VideoMediaEncoder.java (19 changes)
  28. cameraview/src/main/java/com/otaliastudios/cameraview/Camera1.java (95 changes)
  29. cameraview/src/main/java/com/otaliastudios/cameraview/CameraController.java (109 changes)
  30. cameraview/src/main/java/com/otaliastudios/cameraview/CameraOptions.java (13 changes)
  31. cameraview/src/main/java/com/otaliastudios/cameraview/CameraView.java (76 changes)
  32. cameraview/src/main/java/com/otaliastudios/cameraview/PictureResult.java (11 changes)
  33. cameraview/src/main/java/com/otaliastudios/cameraview/SnapshotPictureRecorder.java (18 changes)
  34. cameraview/src/main/java/com/otaliastudios/cameraview/SnapshotVideoRecorder.java (28 changes)
  35. cameraview/src/main/java/com/otaliastudios/cameraview/VideoRecorder.java (1 change)
  36. cameraview/src/main/java/com/otaliastudios/cameraview/VideoResult.java (11 changes)
  37. cameraview/src/main/res/values/attrs.xml (3 changes)
  38. cameraview/src/main/utils/com/otaliastudios/cameraview/WorkerHandler.java (10 changes)
  39. cameraview/src/main/views/com/otaliastudios/cameraview/CameraPreview.java (25 changes)
  40. cameraview/src/main/views/com/otaliastudios/cameraview/GlCameraPreview.java (8 changes)
  41. cameraview/src/main/views/com/otaliastudios/cameraview/SurfaceCameraPreview.java (6 changes)
  42. cameraview/src/main/views/com/otaliastudios/cameraview/TextureCameraPreview.java (10 changes)
  43. codecov.yml (4 changes)
  44. demo/build.gradle (4 changes)
  45. demo/src/main/AndroidManifest.xml (4 changes)
  46. demo/src/main/java/com/otaliastudios/cameraview/demo/CameraActivity.java (1 change)
  47. demo/src/main/java/com/otaliastudios/cameraview/demo/PicturePreviewActivity.java (15 changes)
  48. demo/src/main/java/com/otaliastudios/cameraview/demo/VideoPreviewActivity.java (7 changes)
  49. docs/_posts/2018-12-20-capture-size.md (8 changes)
  50. docs/_posts/2018-12-20-capturing-media.md (4 changes)
  51. docs/_posts/2018-12-20-changelog.md (16 changes)
  52. docs/_posts/2018-12-20-debugging.md (2 changes)
  53. docs/_posts/2018-12-20-error-handling.md (2 changes)
  54. docs/_posts/2018-12-20-install.md (2 changes)
  55. docs/_posts/2018-12-20-more-features.md (2 changes)
  56. docs/_posts/2018-12-20-preview-size.md (12 changes)
  57. docs/_posts/2018-12-20-runtime-permissions.md (9 changes)
  58. docs/_posts/2019-02-24-snapshot-size.md (58 changes)

@@ -10,3 +10,6 @@ assignees: ''
### How do I?
Describe your problem here. Please, read the docs first.
Questions not strictly related to CameraView should be asked elsewhere.
### Version used
CameraView exact version.

@@ -1,3 +1,5 @@
For v2 changelogs, please take a look at the [website](https://natario1.github.io/CameraView/about/changelog.html).
## v1.6.1
This is the last release before v2.

@@ -1,5 +1,8 @@
[![Build Status](https://travis-ci.org/natario1/CameraView.svg?branch=master)](https://travis-ci.org/natario1/CameraView)
[![Code Coverage](https://codecov.io/gh/natario1/CameraView/branch/master/graph/badge.svg)](https://codecov.io/gh/natario1/CameraView)
[![Release](https://img.shields.io/github/release/natario1/CameraView.svg)](https://github.com/natario1/CameraView/releases)
[![Issues](https://img.shields.io/github/issues-raw/natario1/CameraView.svg)](https://github.com/natario1/CameraView/issues)
[![Funding](https://img.shields.io/opencollective/all/CameraView.svg?colorB=r)](https://natario1.github.io/CameraView/extra/donate)
*This is a new major version (v2) of the library. It includes breaking changes, signature changes and new functionality.
Keep reading if interested, or head to the legacy-v1 branch to read v1 documentation and info.*
@@ -19,7 +22,7 @@ CameraView is a well documented, high-level library that makes capturing pictures and videos easy,
addressing most of the common issues and needs, and still leaving you with flexibility where needed.
```groovy
compile 'com.otaliastudios:cameraview:2.0.0-beta01'
compile 'com.otaliastudios:cameraview:2.0.0-beta03'
```
- Fast & reliable

@@ -3,7 +3,7 @@ apply plugin: 'com.github.dcendents.android-maven'
apply plugin: 'com.jfrog.bintray'
// Required by bintray
version = '2.0.0-beta01'
version = '2.0.0-beta03'
group = 'com.otaliastudios'
//region android dependencies

@@ -130,6 +130,27 @@ public class CameraOptions1Test extends BaseTest {
}
}
@Test
public void testVideoSizesNull() {
// When videoSizes is null, we take the preview sizes.
List<Camera.Size> sizes = Arrays.asList(
mockCameraSize(100, 200),
mockCameraSize(50, 50),
mockCameraSize(1600, 900),
mockCameraSize(1000, 2000)
);
Camera.Parameters params = mock(Camera.Parameters.class);
when(params.getSupportedVideoSizes()).thenReturn(null);
when(params.getSupportedPreviewSizes()).thenReturn(sizes);
CameraOptions o = new CameraOptions(params, false);
Collection<Size> supportedSizes = o.getSupportedVideoSizes();
assertEquals(supportedSizes.size(), sizes.size());
for (Camera.Size size : sizes) {
Size internalSize = new Size(size.width, size.height);
assertTrue(supportedSizes.contains(internalSize));
}
}
@Test
public void testVideoSizesFlip() {
List<Camera.Size> sizes = Arrays.asList(

@@ -102,17 +102,17 @@ public abstract class CameraPreviewTest extends BaseTest {
@Test
public void testDesiredSize() {
preview.setInputStreamSize(160, 90, false);
assertEquals(160, preview.getInputStreamSize().getWidth());
assertEquals(90, preview.getInputStreamSize().getHeight());
preview.setStreamSize(160, 90, false);
assertEquals(160, preview.getStreamSize().getWidth());
assertEquals(90, preview.getStreamSize().getHeight());
}
@Test
public void testSurfaceAvailable() {
ensureAvailable();
verify(callback, times(1)).onSurfaceAvailable();
assertEquals(surfaceSize.getWidth(), preview.getOutputSurfaceSize().getWidth());
assertEquals(surfaceSize.getHeight(), preview.getOutputSurfaceSize().getHeight());
assertEquals(surfaceSize.getWidth(), preview.getSurfaceSize().getWidth());
assertEquals(surfaceSize.getHeight(), preview.getSurfaceSize().getHeight());
}
@Test
@@ -121,8 +121,8 @@ public abstract class CameraPreviewTest extends BaseTest {
ensureDestroyed();
// This might be called twice in Texture because it overrides ensureDestroyed method
verify(callback, atLeastOnce()).onSurfaceDestroyed();
assertEquals(0, preview.getOutputSurfaceSize().getWidth());
assertEquals(0, preview.getOutputSurfaceSize().getHeight());
assertEquals(0, preview.getSurfaceSize().getWidth());
assertEquals(0, preview.getSurfaceSize().getHeight());
}
@Test
@@ -146,7 +146,7 @@ public abstract class CameraPreviewTest extends BaseTest {
// Since desired is 'desired', let's fake a new view size that is consistent with it.
// Ensure crop is not happening anymore.
preview.mCropTask.listen();
preview.dispatchOnOutputSurfaceSizeChanged((int) (50f * desired), 50); // Wait...
preview.dispatchOnSurfaceSizeChanged((int) (50f * desired), 50); // Wait...
preview.mCropTask.await();
assertEquals(desired, getViewAspectRatioWithScale(), 0.01f);
assertFalse(preview.isCropping());
@@ -154,19 +154,19 @@ public abstract class CameraPreviewTest extends BaseTest {
private void setDesiredAspectRatio(float desiredAspectRatio) {
preview.mCropTask.listen();
preview.setInputStreamSize((int) (10f * desiredAspectRatio), 10, false); // Wait...
preview.setStreamSize((int) (10f * desiredAspectRatio), 10, false); // Wait...
preview.mCropTask.await();
assertEquals(desiredAspectRatio, getViewAspectRatioWithScale(), 0.01f);
}
private float getViewAspectRatio() {
Size size = preview.getOutputSurfaceSize();
Size size = preview.getSurfaceSize();
return AspectRatio.of(size.getWidth(), size.getHeight()).toFloat();
}
private float getViewAspectRatioWithScale() {
Size size = preview.getOutputSurfaceSize();
Size size = preview.getSurfaceSize();
int newWidth = (int) (((float) size.getWidth()) * getCropScaleX());
int newHeight = (int) (((float) size.getHeight()) * getCropScaleY());
return AspectRatio.of(newWidth, newHeight).toFloat();

@@ -62,7 +62,7 @@ public class CameraViewCallbacksTest extends BaseTest {
}
@Override
protected boolean checkPermissions(@NonNull Mode mode, @NonNull Audio audio) {
protected boolean checkPermissions(@NonNull Audio audio) {
return true;
}
};

@@ -50,7 +50,7 @@ public class CameraViewTest extends BaseTest {
}
@Override
protected boolean checkPermissions(@NonNull Mode mode, @NonNull Audio audio) {
protected boolean checkPermissions(@NonNull Audio audio) {
return hasPermissions;
}
};
@@ -288,14 +288,14 @@ public class CameraViewTest extends BaseTest {
//region testMeasure
private void mockPreviewSize() {
private void mockPreviewStreamSize() {
Size size = new Size(900, 1600);
mockController.setMockPreviewSize(size);
mockController.setMockPreviewStreamSize(size);
}
@Test
public void testMeasure_early() {
mockController.setMockPreviewSize(null);
mockController.setMockPreviewStreamSize(null);
cameraView.measure(
makeMeasureSpec(500, EXACTLY),
makeMeasureSpec(500, EXACTLY));
@@ -305,7 +305,7 @@ public class CameraViewTest extends BaseTest {
@Test
public void testMeasure_matchParentBoth() {
mockPreviewSize();
mockPreviewStreamSize();
// Respect parent/layout constraints on both dimensions.
cameraView.setLayoutParams(new ViewGroup.LayoutParams(MATCH_PARENT, MATCH_PARENT));
@@ -331,7 +331,7 @@ public class CameraViewTest extends BaseTest {
@Test
public void testMeasure_wrapContentBoth() {
mockPreviewSize();
mockPreviewStreamSize();
// Respect parent constraints, but fit aspect ratio.
// Fit into a 160x160 parent so we expect final width to be 90.
@@ -345,7 +345,7 @@ public class CameraViewTest extends BaseTest {
@Test
public void testMeasure_wrapContentSingle() {
mockPreviewSize();
mockPreviewStreamSize();
// Respect MATCH_PARENT on height, change width to fit the aspect ratio.
cameraView.setLayoutParams(new ViewGroup.LayoutParams(WRAP_CONTENT, MATCH_PARENT));
@@ -366,7 +366,7 @@ public class CameraViewTest extends BaseTest {
@Test
public void testMeasure_scrollableContainer() {
mockPreviewSize();
mockPreviewStreamSize();
// Assume a vertical scroll view. It will pass UNSPECIFIED as height.
// We respect MATCH_PARENT on width (160), and enlarge height to match the aspect ratio.
@@ -559,10 +559,10 @@ public class CameraViewTest extends BaseTest {
}
@Test
public void testPreviewSizeSelector() {
public void testPreviewStreamSizeSelector() {
SizeSelector source = SizeSelectors.minHeight(50);
cameraView.setPreviewSize(source);
SizeSelector result = mockController.getPreviewSizeSelector();
cameraView.setPreviewStreamSize(source);
SizeSelector result = mockController.getPreviewStreamSizeSelector();
assertNotNull(result);
assertEquals(result, source);
}
@@ -661,5 +661,17 @@ public class CameraViewTest extends BaseTest {
//endregion
//region Snapshots
@Test
public void testSetSnapshotMaxSize() {
cameraView.setSnapshotMaxWidth(500);
cameraView.setSnapshotMaxHeight(1000);
assertEquals(mockController.mSnapshotMaxWidth, 500);
assertEquals(mockController.mSnapshotMaxHeight, 1000);
}
//endregion
// TODO: test permissions
}

@@ -89,7 +89,7 @@ public class IntegrationTest extends BaseTest {
}
@After
public void tearDown() throws Exception {
public void tearDown() {
camera.stopVideo();
camera.destroy();
WorkerHandler.destroy();

@@ -23,8 +23,8 @@ public class MockCameraController extends CameraController {
mCameraOptions = options;
}
void setMockPreviewSize(Size size) {
mPreviewSize = size;
void setMockPreviewStreamSize(Size size) {
mPreviewStreamSize = size;
}
void mockStarted(boolean started) {

@@ -27,12 +27,14 @@ public class PictureResultTest extends BaseTest {
byte[] jpeg = new byte[]{2, 4, 1, 5, 2};
Location location = Mockito.mock(Location.class);
boolean isSnapshot = true;
Facing facing = Facing.FRONT;
result.format = format;
result.rotation = rotation;
result.size = size;
result.data = jpeg;
result.location = location;
result.facing = facing;
//noinspection ConstantConditions
result.isSnapshot = isSnapshot;
@@ -43,5 +45,6 @@
assertEquals(result.getLocation(), location);
//noinspection ConstantConditions
assertEquals(result.isSnapshot(), isSnapshot);
assertEquals(result.getFacing(), facing);
}
}

@@ -23,7 +23,7 @@ public class TextureCameraPreviewTest extends CameraPreviewTest {
if (isHardwareAccelerated()) {
super.ensureAvailable();
} else {
preview.dispatchOnOutputSurfaceAvailable(
preview.dispatchOnSurfaceAvailable(
surfaceSize.getWidth(),
surfaceSize.getHeight());
}
@@ -34,7 +34,7 @@ public class TextureCameraPreviewTest extends CameraPreviewTest {
super.ensureDestroyed();
if (!isHardwareAccelerated()) {
// Ensure it is called.
preview.dispatchOnOutputSurfaceDestroyed();
preview.dispatchOnSurfaceDestroyed();
}
}

@@ -36,6 +36,7 @@ public class VideoResultTest extends BaseTest {
int videoBitRate = 300000;
int audioBitRate = 30000;
Audio audio = Audio.ON;
Facing facing = Facing.FRONT;
result.file = file;
result.rotation = rotation;
@@ -50,6 +51,7 @@ public class VideoResultTest extends BaseTest {
result.videoBitRate = videoBitRate;
result.audioBitRate = audioBitRate;
result.audio = audio;
result.facing = facing;
assertEquals(result.getFile(), file);
assertEquals(result.getRotation(), rotation);
@@ -64,5 +66,7 @@ public class VideoResultTest extends BaseTest {
assertEquals(result.getVideoBitRate(), videoBitRate);
assertEquals(result.getAudioBitRate(), audioBitRate);
assertEquals(result.getAudio(), audio);
assertEquals(result.getFacing(), facing);
}
}

@@ -1,34 +1,64 @@
package com.otaliastudios.cameraview;
import android.annotation.SuppressLint;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTimestamp;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaRecorder;
import android.os.Build;
import android.os.Handler;
import android.os.Message;
import android.util.Log;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.annotation.RequiresApi;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.LinkedBlockingQueue;
// TODO create onVideoRecordingStart/onVideoRecordingEnd callbacks
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR2)
class AudioMediaEncoder extends MediaEncoder {
private static final String TAG = AudioMediaEncoder.class.getSimpleName();
private static final CameraLogger LOG = CameraLogger.create(TAG);
private static final String MIME_TYPE = "audio/mp4a-latm";
private static final int SAMPLE_RATE = 44100; // 44.1[KHz] is the only setting guaranteed to be available on all devices.
public static final int SAMPLES_PER_FRAME = 1024; // AAC, bytes/frame/channel
public static final int FRAMES_PER_BUFFER = 25; // AAC, frame/buffer/sec
private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT; // Determines the SAMPLE_SIZE
private static final int CHANNELS = AudioFormat.CHANNEL_IN_MONO; // AudioFormat.CHANNEL_IN_STEREO;
// The 44.1KHz frequency is the only setting guaranteed to be available on all devices.
private static final int SAMPLING_FREQUENCY = 44100; // samples/sec
private static final int CHANNELS_COUNT = 1; // 2;
private static final int SAMPLE_SIZE = 2; // byte/sample/channel
private static final int BYTE_RATE_PER_CHANNEL = SAMPLING_FREQUENCY * SAMPLE_SIZE; // byte/sec/channel
private static final int BYTE_RATE = BYTE_RATE_PER_CHANNEL * CHANNELS_COUNT; // byte/sec
static final int BIT_RATE = BYTE_RATE * 8; // bit/sec
// We call FRAME here the chunk of data that we want to read at each loop cycle
private static final int FRAME_SIZE_PER_CHANNEL = 1024; // bytes/frame/channel [AAC constant]
private static final int FRAME_SIZE = FRAME_SIZE_PER_CHANNEL * CHANNELS_COUNT; // bytes/frame
// We allocate buffers of 1KB each, which is not so much. I would say that allocating
// at most 200 of them is a reasonable value. With the current setup, in device tests,
// we manage to use 50 at most.
private static final int BUFFER_POOL_MAX_SIZE = 200;
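As a quick check of the arithmetic behind these constants: BYTE_RATE = 44100 × 2 × 1 = 88200 bytes/sec, so each 1024-byte frame covers roughly 11.6 ms of audio. A minimal standalone sketch of that math (class name is illustrative):

```java
// Sanity check for the audio constants above: mono, 16-bit PCM at 44.1 kHz.
public class AudioConstantsCheck {
    public static void main(String[] args) {
        int samplingFrequency = 44100;      // samples/sec
        int sampleSize = 2;                 // bytes/sample/channel (PCM 16-bit)
        int channelsCount = 1;              // mono
        int byteRate = samplingFrequency * sampleSize * channelsCount; // 88200 bytes/sec
        int bitRate = byteRate * 8;                                    // 705600 bits/sec
        int frameSize = 1024 * channelsCount;                          // bytes/frame
        long frameDurationUs = 1_000_000L * frameSize / byteRate;      // ~11609 us
        System.out.println(bitRate + " bit/sec, " + frameDurationUs + " us per frame");
    }
}
```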
private final Object mLock = new Object();
private boolean mRequestStop = false;
private AudioEncodingHandler mEncoder;
private AudioRecordingThread mRecorder;
private ByteBufferPool mByteBufferPool;
private Config mConfig;
static class Config {
int bitRate;
Config(int bitRate) {
this.bitRate = bitRate;
}
@@ -38,15 +68,20 @@ class AudioMediaEncoder extends MediaEncoder {
mConfig = config;
}
@NonNull
@Override
String getName() {
return "AudioEncoder";
}
@EncoderThread
@Override
void prepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
super.prepare(controller, maxLengthMillis);
final MediaFormat audioFormat = MediaFormat.createAudioFormat(MIME_TYPE, SAMPLE_RATE, 1);
void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
final MediaFormat audioFormat = MediaFormat.createAudioFormat(MIME_TYPE, SAMPLING_FREQUENCY, CHANNELS_COUNT);
audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
audioFormat.setInteger(MediaFormat.KEY_CHANNEL_MASK, AudioFormat.CHANNEL_IN_MONO);
audioFormat.setInteger(MediaFormat.KEY_CHANNEL_MASK, CHANNELS);
audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, mConfig.bitRate);
audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, 1);
audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, CHANNELS_COUNT);
try {
mMediaCodec = MediaCodec.createEncoderByType(MIME_TYPE);
} catch (IOException e) {
@@ -54,86 +89,228 @@ class AudioMediaEncoder extends MediaEncoder {
}
mMediaCodec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mMediaCodec.start();
mByteBufferPool = new ByteBufferPool(FRAME_SIZE, BUFFER_POOL_MAX_SIZE);
mEncoder = new AudioEncodingHandler();
mRecorder = new AudioRecordingThread();
}
@EncoderThread
@Override
void start() {
void onStart() {
mRequestStop = false;
new AudioThread().start();
mRecorder.start();
}
@EncoderThread
@Override
void notify(@NonNull String event, @Nullable Object data) { }
void onEvent(@NonNull String event, @Nullable Object data) { }
@EncoderThread
@Override
void stop() {
void onStop() {
mRequestStop = true;
synchronized (mLock) {
try {
mLock.wait();
} catch (InterruptedException e) {
// do nothing
}
@Override
void onRelease() {
mRequestStop = false;
mEncoder = null;
mRecorder = null;
if (mByteBufferPool != null) {
mByteBufferPool.clear();
mByteBufferPool = null;
}
}
@Override
void release() {
super.release();
mRequestStop = false;
int getEncodedBitRate() {
return mConfig.bitRate;
}
class AudioThread extends Thread {
class AudioRecordingThread extends Thread {
private AudioRecord mAudioRecord;
private ByteBuffer mCurrentBuffer;
private int mReadBytes;
private long mLastTimeUs;
AudioThread() {
final int minBufferSize = AudioRecord.getMinBufferSize(
SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
int bufferSize = SAMPLES_PER_FRAME * FRAMES_PER_BUFFER;
if (bufferSize < minBufferSize) {
bufferSize = ((minBufferSize / SAMPLES_PER_FRAME) + 1) * SAMPLES_PER_FRAME * 2;
AudioRecordingThread() {
final int minBufferSize = AudioRecord.getMinBufferSize(SAMPLING_FREQUENCY, CHANNELS, ENCODING);
int bufferSize = FRAME_SIZE * 25; // Make this bigger so we don't skip frames.
while (bufferSize < minBufferSize) {
bufferSize += FRAME_SIZE; // Unlikely I think.
}
mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER, SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
SAMPLING_FREQUENCY, CHANNELS, ENCODING, bufferSize);
setPriority(Thread.MAX_PRIORITY);
}
@Override
public void run() {
super.run();
mLastTimeUs = System.nanoTime() / 1000L;
mAudioRecord.startRecording();
final ByteBuffer buffer = ByteBuffer.allocateDirect(SAMPLES_PER_FRAME);
int readBytes;
while (!mRequestStop) {
buffer.clear();
readBytes = mAudioRecord.read(buffer, SAMPLES_PER_FRAME);
if (readBytes > 0) {
// set audio data to encoder
buffer.position(readBytes);
buffer.flip();
encode(buffer, readBytes, getPresentationTime());
drain(false);
}
read(false);
}
// This will signal the endOfStream.
LOG.w("RECORDER: Stop was requested. We're out of the loop. Will post an endOfStream.");
// Last input with 0 length. This will signal the endOfStream.
// Can't use drain(true); it is only available when writing to the codec InputSurface.
encode(null, 0, getPresentationTime());
drain(false);
read(true);
mAudioRecord.stop();
mAudioRecord.release();
mAudioRecord = null;
synchronized (mLock) {
mLock.notify();
}
private void read(boolean endOfStream) {
mCurrentBuffer = mByteBufferPool.get();
if (mCurrentBuffer == null) {
LOG.e("Skipping audio frame, encoding is too slow.");
// TODO should fix the next presentation time here. However this is
// extremely unlikely based on my tests. The mByteBufferPool should be big enough.
} else {
mCurrentBuffer.clear();
mReadBytes = mAudioRecord.read(mCurrentBuffer, FRAME_SIZE);
if (mReadBytes > 0) { // Good read: increase PTS.
increaseTime(mReadBytes);
mCurrentBuffer.limit(mReadBytes);
onBuffer(endOfStream);
} else if (mReadBytes == AudioRecord.ERROR_INVALID_OPERATION) {
LOG.e("Got AudioRecord.ERROR_INVALID_OPERATION");
} else if (mReadBytes == AudioRecord.ERROR_BAD_VALUE) {
LOG.e("Got AudioRecord.ERROR_BAD_VALUE");
}
}
}
/**
* New data at position buffer.position() of size buffer.remaining()
* has been written into this buffer. This method should pass the data
* to the consumer.
*/
private void onBuffer(boolean endOfStream) {
mEncoder.sendInputBuffer(mCurrentBuffer, mLastTimeUs, endOfStream);
}
private void increaseTime(int readBytes) {
increaseTime3(readBytes);
LOG.v("Read", readBytes, "bytes, increasing PTS to", mLastTimeUs);
}
/**
* This method simply assumes that we read everything without losing a single microsecond.
* It will use System.nanoTime() just once, as the starting point.
* Of course we don't, as other work happens on this thread.
*/
private void increaseTime1(int readBytes) {
mLastTimeUs += (1000000L * readBytes) / BYTE_RATE;
}
/**
* Just for testing, this method uses the API 24 method to retrieve the timestamp.
* This way we let the platform choose instead of making assumptions.
*/
@RequiresApi(24)
private void increaseTime2(int readBytes) {
if (mApi24Timestamp == null) {
mApi24Timestamp = new AudioTimestamp();
}
mAudioRecord.getTimestamp(mApi24Timestamp, AudioTimestamp.TIMEBASE_MONOTONIC);
mLastTimeUs = mApi24Timestamp.nanoTime / 1000;
}
private AudioTimestamp mApi24Timestamp;
/**
* This method looks like an improvement over {@link #increaseTime1(int)} as it
* accounts for the current time as well. Adapted & improved from Kickflip.
*/
private void increaseTime3(int readBytes) {
long currentTime = System.nanoTime() / 1000;
long correctedTime;
long bufferDuration = (1000000 * readBytes) / BYTE_RATE;
long bufferTime = currentTime - bufferDuration; // delay of acquiring the audio buffer
if (mTotalReadBytes == 0) {
mStartTimeUs = bufferTime;
}
// Recompute time assuming that we are respecting the sampling frequency.
// However, if the correction is too big (> 2*bufferDuration), reset to this point.
correctedTime = mStartTimeUs + (1000000 * mTotalReadBytes) / BYTE_RATE;
if (bufferTime - correctedTime >= 2 * bufferDuration) {
mStartTimeUs = bufferTime;
mTotalReadBytes = 0;
correctedTime = mStartTimeUs;
}
mTotalReadBytes += readBytes;
mLastTimeUs = correctedTime;
}
private long mStartTimeUs;
private long mTotalReadBytes;
}
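The drift correction above can be read in isolation. Below is a minimal standalone sketch of the same increaseTime3 logic, assuming the BYTE_RATE from the constants (88200 bytes/sec); the class and method names are chosen here for illustration:

```java
// Standalone sketch of the increaseTime3() drift correction.
class AudioTimestampCorrector {
    private static final long BYTE_RATE = 88200; // bytes/sec, matching the encoder above
    private long startTimeUs;
    private long totalReadBytes;

    /** Returns the corrected presentation time (us) for readBytes bytes read at nowUs. */
    long correct(long nowUs, int readBytes) {
        long bufferDurationUs = (1_000_000L * readBytes) / BYTE_RATE;
        long bufferTimeUs = nowUs - bufferDurationUs; // when this buffer started being acquired
        if (totalReadBytes == 0) startTimeUs = bufferTimeUs;
        // Recompute the time assuming we respected the sampling frequency since startTimeUs...
        long correctedUs = startTimeUs + (1_000_000L * totalReadBytes) / BYTE_RATE;
        // ...but if we drifted by more than two buffer durations, resynchronize here.
        if (bufferTimeUs - correctedUs >= 2 * bufferDurationUs) {
            startTimeUs = bufferTimeUs;
            totalReadBytes = 0;
            correctedUs = startTimeUs;
        }
        totalReadBytes += readBytes;
        return correctedUs;
    }
}
```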
/**
* This will be a super busy thread. It's important for it to be:
* - different from the recording thread: otherwise we would miss a lot of audio
* - different from the 'encoder' thread: we want that one to stay reactive.
* For example, a stop() must become onStop() soon; it can't wait for all this draining.
*/
@SuppressLint("HandlerLeak")
class AudioEncodingHandler extends Handler {
InputBufferPool mInputBufferPool = new InputBufferPool();
LinkedBlockingQueue<InputBuffer> mPendingOps = new LinkedBlockingQueue<>();
AudioEncodingHandler() {
super(WorkerHandler.get("AudioEncodingHandler").getLooper());
}
void sendInputBuffer(ByteBuffer buffer, long presentationTimeUs, boolean endOfStream) {
int presentation1 = (int) (presentationTimeUs >> 32);
int presentation2 = (int) (presentationTimeUs);
sendMessage(obtainMessage(endOfStream ? 1 : 0, presentation1, presentation2, buffer));
}
@Override
int getBitRate() {
return mConfig.bitRate;
public void handleMessage(Message msg) {
super.handleMessage(msg);
boolean endOfStream = msg.what == 1;
long timestamp = (((long) msg.arg1) << 32) | (((long) msg.arg2) & 0xffffffffL);
ByteBuffer buffer = (ByteBuffer) msg.obj;
int readBytes = buffer.remaining();
InputBuffer inputBuffer = mInputBufferPool.get();
inputBuffer.source = buffer;
inputBuffer.timestamp = timestamp;
inputBuffer.length = readBytes;
inputBuffer.isEndOfStream = endOfStream;
mPendingOps.add(inputBuffer);
performPendingOps(endOfStream);
}
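A Message carries only two 32-bit ints in arg1/arg2, so sendInputBuffer splits the 64-bit timestamp into high and low words and handleMessage reassembles them. A round-trip sketch of that packing:

```java
// Round-trip check: packing a long timestamp into two ints (Message.arg1 / arg2).
public class TimestampPacking {
    public static void main(String[] args) {
        long timestampUs = System.nanoTime() / 1000L;
        int hi = (int) (timestampUs >> 32);       // arg1: high 32 bits
        int lo = (int) timestampUs;               // arg2: low 32 bits
        long restored = (((long) hi) << 32) | (((long) lo) & 0xffffffffL);
        if (restored != timestampUs) throw new AssertionError("lossy packing");
        System.out.println(timestampUs + " == " + restored);
    }
}
```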
private void performPendingOps(boolean force) {
LOG.v("Performing", mPendingOps.size(), "Pending operations.");
InputBuffer buffer;
while ((buffer = mPendingOps.peek()) != null) {
if (force) {
acquireInputBuffer(buffer);
performPendingOp(buffer);
} else if (tryAcquireInputBuffer(buffer)) {
performPendingOp(buffer);
} else {
break; // Will try later.
}
}
}
private void performPendingOp(InputBuffer buffer) {
buffer.data.put(buffer.source);
mByteBufferPool.recycle(buffer.source);
mPendingOps.remove(buffer);
encodeInputBuffer(buffer);
boolean eos = buffer.isEndOfStream;
mInputBufferPool.recycle(buffer);
drainOutput(eos);
if (eos) {
mInputBufferPool.clear();
WorkerHandler.get("AudioEncodingHandler").getThread().interrupt();
}
}
}
}

@@ -0,0 +1,15 @@
package com.otaliastudios.cameraview;
import java.nio.ByteBuffer;
class ByteBufferPool extends Pool<ByteBuffer> {
ByteBufferPool(final int bufferSize, int maxPoolSize) {
super(maxPoolSize, new Factory<ByteBuffer>() {
@Override
public ByteBuffer create() {
return ByteBuffer.allocateDirect(bufferSize);
}
});
}
}

@@ -151,6 +151,7 @@ class EglBaseSurface extends EglElement {
/**
* Sends the presentation time stamp to EGL.
* https://www.khronos.org/registry/EGL/extensions/ANDROID/EGL_ANDROID_presentation_time.txt
*
* @param nsecs Timestamp, in nanoseconds.
*/

@@ -137,7 +137,7 @@ final class EglCore {
int[] values = new int[1];
EGL14.eglQueryContext(mEGLDisplay, mEGLContext, EGL14.EGL_CONTEXT_CLIENT_VERSION,
values, 0);
Log.d(TAG, "EGLContext created, client version " + values[0]);
// Log.d(TAG, "EGLContext created, client version " + values[0]);
}
/**
@@ -273,7 +273,7 @@ final class EglCore {
public void makeCurrent(EGLSurface eglSurface) {
if (mEGLDisplay == EGL14.EGL_NO_DISPLAY) {
// called makeCurrent() before create?
Log.d(TAG, "NOTE: makeCurrent w/o display");
// Log.d(TAG, "NOTE: makeCurrent w/o display");
}
if (!EGL14.eglMakeCurrent(mEGLDisplay, eglSurface, eglSurface, mEGLContext)) {
throw new RuntimeException("eglMakeCurrent failed");
@@ -314,6 +314,7 @@ final class EglCore {
/**
* Sends the presentation time stamp to EGL. Time is expressed in nanoseconds.
* https://www.khronos.org/registry/EGL/extensions/ANDROID/EGL_ANDROID_presentation_time.txt
*/
public void setPresentationTime(EGLSurface eglSurface, long nsecs) {
EGLExt.eglPresentationTimeANDROID(mEGLDisplay, eglSurface, nsecs);
@@ -350,21 +351,6 @@ final class EglCore {
return mGlVersion;
}
/**
* Writes the current display, context, and surface to the log.
*/
public static void logCurrent(String msg) {
EGLDisplay display;
EGLContext context;
EGLSurface surface;
display = EGL14.eglGetCurrentDisplay();
context = EGL14.eglGetCurrentContext();
surface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
Log.i(TAG, "Current EGL (" + msg + "): display=" + display + ", context=" + context +
", surface=" + surface);
}
/**
* Checks for EGL errors. Throws an exception if an error has been raised.
*/

@@ -0,0 +1,12 @@
package com.otaliastudios.cameraview;
import java.nio.ByteBuffer;
class InputBuffer {
ByteBuffer data;
ByteBuffer source;
int index;
int length;
long timestamp;
boolean isEndOfStream;
}

@@ -0,0 +1,15 @@
package com.otaliastudios.cameraview;
import java.nio.ByteBuffer;
class InputBufferPool extends Pool<InputBuffer> {
InputBufferPool() {
super(Integer.MAX_VALUE, new Factory<InputBuffer>() {
@Override
public InputBuffer create() {
return new InputBuffer();
}
});
}
}

@@ -0,0 +1,50 @@
package com.otaliastudios.cameraview;
import android.media.MediaCodec;
import android.os.Build;
import java.nio.ByteBuffer;
/**
* A Wrapper to MediaCodec that facilitates the use of API-dependent get{Input/Output}Buffer methods,
* in order to prevent: http://stackoverflow.com/q/30646885
*/
class MediaCodecBuffers {
private final MediaCodec mMediaCodec;
private final ByteBuffer[] mInputBuffers;
private ByteBuffer[] mOutputBuffers;
MediaCodecBuffers(MediaCodec mediaCodec) {
mMediaCodec = mediaCodec;
if (Build.VERSION.SDK_INT < 21) {
mInputBuffers = mediaCodec.getInputBuffers();
mOutputBuffers = mediaCodec.getOutputBuffers();
} else {
mInputBuffers = mOutputBuffers = null;
}
}
public ByteBuffer getInputBuffer(final int index) {
if (Build.VERSION.SDK_INT >= 21) {
return mMediaCodec.getInputBuffer(index);
}
ByteBuffer buffer = mInputBuffers[index];
buffer.clear();
return buffer;
}
public ByteBuffer getOutputBuffer(final int index) {
if (Build.VERSION.SDK_INT >= 21) {
return mMediaCodec.getOutputBuffer(index);
}
return mOutputBuffers[index];
}
public void onOutputBuffersChanged() {
if (Build.VERSION.SDK_INT < 21) {
mOutputBuffers = mMediaCodec.getOutputBuffers();
}
}
}
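A hedged usage sketch for this wrapper, assuming a MediaCodec that has already been configured and started, and same-package access to MediaCodecBuffers; the helper method and names are illustrative:

```java
import android.media.MediaCodec;
import java.nio.ByteBuffer;

class MediaCodecBuffersUsage {
    // Feed one chunk of raw input through the codec, version-safe on API <21 and >=21.
    static void feed(MediaCodec codec, MediaCodecBuffers buffers,
                     ByteBuffer chunk, long presentationTimeUs) {
        int index = codec.dequeueInputBuffer(0 /* timeout, us */);
        if (index < 0) return; // no input buffer available right now
        ByteBuffer in = buffers.getInputBuffer(index); // cleared for us on API <21
        int length = chunk.remaining();
        in.put(chunk);
        codec.queueInputBuffer(index, 0, length, presentationTimeUs, 0);
    }
}
```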

@@ -1,8 +1,10 @@
package com.otaliastudios.cameraview;
import android.annotation.SuppressLint;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.os.Build;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.annotation.RequiresApi;
@@ -14,17 +16,107 @@ import java.nio.ByteBuffer;
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR2)
abstract class MediaEncoder {
private final static int TIMEOUT_USEC = 10000; // 10 msec
private final static String TAG = MediaEncoder.class.getSimpleName();
private final static CameraLogger LOG = CameraLogger.create(TAG);
// Did some tests to see which value would maximize our performance in the current setup (infinite audio pool).
// Measured the time it would take to write a 30-second video. Based on this, we'll go with TIMEOUT=0 for now.
// INPUT_TIMEOUT_US 10000: 46 seconds
// INPUT_TIMEOUT_US 1000: 37 seconds
// INPUT_TIMEOUT_US 100: 33 seconds
// INPUT_TIMEOUT_US 0: 32 seconds
private final static int INPUT_TIMEOUT_US = 0;
// 0 also seems to be best, although it does not change much.
// Can't go too high or this is a bottleneck for the audio encoder.
private final static int OUTPUT_TIMEOUT_US = 0;
@SuppressWarnings("WeakerAccess")
protected MediaCodec mMediaCodec;
private MediaCodec.BufferInfo mBufferInfo;
@SuppressWarnings("WeakerAccess")
protected WorkerHandler mWorker;
private MediaEncoderEngine.Controller mController;
private int mTrackIndex;
private OutputBufferPool mOutputBufferPool;
private MediaCodec.BufferInfo mBufferInfo;
private MediaCodecBuffers mBuffers;
private long mMaxLengthMillis;
private boolean mMaxLengthReached;
/**
* A readable name for the thread.
*/
@NonNull
abstract String getName();
/**
* This encoder was attached to the engine. Keep the controller
* and run the internal thread.
*/
final void prepare(@NonNull final MediaEncoderEngine.Controller controller, final long maxLengthMillis) {
mController = controller;
mBufferInfo = new MediaCodec.BufferInfo();
mMaxLengthMillis = maxLengthMillis;
mWorker = WorkerHandler.get(getName());
LOG.i(getName(), "Prepare was called. Posting.");
mWorker.post(new Runnable() {
@Override
public void run() {
LOG.i(getName(), "Prepare was called. Executing.");
onPrepare(controller, maxLengthMillis);
}
});
}
/**
* Start recording. This might be a lightweight operation
* in case the encoder needs to wait for a certain event
* like a "frame available".
*/
final void start() {
LOG.i(getName(), "Start was called. Posting.");
mWorker.post(new Runnable() {
@Override
public void run() {
LOG.i(getName(), "Start was called. Executing.");
onStart();
}
});
}
/**
* The caller notifying of a certain event occurring.
* Should analyze the string and see if the event is important.
* @param event what happened
* @param data object
*/
final void notify(final @NonNull String event, final @Nullable Object data) {
LOG.i(getName(), "Notify was called. Posting.");
mWorker.post(new Runnable() {
@Override
public void run() {
LOG.i(getName(), "Notify was called. Executing.");
onEvent(event, data);
}
});
}
/**
* Stop recording.
*/
final void stop() {
LOG.i(getName(), "Stop was called. Posting.");
mWorker.post(new Runnable() {
@Override
public void run() {
LOG.i(getName(), "Stop was called. Executing.");
onStop();
}
});
}
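Each of these final methods only posts a Runnable; the real work happens in the matching onXxx callback on the encoder's own worker thread. A minimal standalone sketch of that pattern, using a plain HandlerThread for illustration:

```java
import android.os.Handler;
import android.os.HandlerThread;

// Sketch of the post-to-worker lifecycle pattern used by MediaEncoder.
abstract class WorkerLifecycle {
    private final HandlerThread thread = new HandlerThread("encoder");
    private final Handler handler;

    WorkerLifecycle() {
        thread.start();
        handler = new Handler(thread.getLooper()); // getLooper() blocks until ready
    }

    final void start() { handler.post(this::onStart); } // returns immediately
    final void stop()  { handler.post(this::onStop);  } // onStop runs later, on the worker

    abstract void onStart(); // executes on the worker thread
    abstract void onStop();  // executes on the worker thread
}
```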
/**
* Called to prepare this encoder before starting.
* Any initialization should be done here as it does not interfere with the original
@@ -33,13 +125,10 @@ abstract class MediaEncoder {
* At this point subclasses MUST create the {@link #mMediaCodec} object.
*
* @param controller the muxer controller
* @param maxLengthMillis the maxLength in millis
*/
@EncoderThread
void prepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
mController = controller;
mBufferInfo = new MediaCodec.BufferInfo();
mMaxLengthMillis = maxLengthMillis;
}
abstract void onPrepare(@NonNull final MediaEncoderEngine.Controller controller, final long maxLengthMillis);
/**
* Start recording. This might be a lightweight operation
@@ -47,7 +136,7 @@ abstract class MediaEncoder {
* like a "frame available".
*/
@EncoderThread
abstract void start();
abstract void onStart();
/**
* The caller notifying of a certain event occurring.
@@ -56,97 +145,130 @@ abstract class MediaEncoder {
* @param data object
*/
@EncoderThread
abstract void notify(@NonNull String event, @Nullable Object data);
abstract void onEvent(@NonNull String event, @Nullable Object data);
/**
* Stop recording.
* This MUST happen SYNCHRONOUSLY!
*/
@EncoderThread
abstract void stop();
abstract void onStop();
/**
* Release resources here.
* Called by {@link #drainOutput(boolean)} when we get an EOS signal (not necessarily in the
* parameters, might also be through an input buffer flag).
*/
@EncoderThread
void release() {
if (mMediaCodec != null) {
private void release() {
LOG.w("Subclass", getName(), "Notified that it is released.");
mController.requestRelease(mTrackIndex);
mMediaCodec.stop();
mMediaCodec.release();
mMediaCodec = null;
mOutputBufferPool.clear();
mOutputBufferPool = null;
mBuffers = null;
onRelease();
}
/**
* This is called when we are stopped.
* It is a good moment to release all resources, although the muxer
* might still be alive (we wait for the other Encoder, see Controller).
*/
abstract void onRelease();
/**
* Returns a new input buffer and index, waiting at most {@link #INPUT_TIMEOUT_US} if none is available.
* Callers should check the boolean result - true if the buffer was filled.
*/
@SuppressWarnings("WeakerAccess")
protected boolean tryAcquireInputBuffer(@NonNull InputBuffer holder) {
if (mBuffers == null) {
mBuffers = new MediaCodecBuffers(mMediaCodec);
}
int inputBufferIndex = mMediaCodec.dequeueInputBuffer(INPUT_TIMEOUT_US);
if (inputBufferIndex < 0) {
return false;
} else {
holder.index = inputBufferIndex;
holder.data = mBuffers.getInputBuffer(inputBufferIndex);
return true;
}
}
/**
* Returns a new input buffer and index, waiting indefinitely if none is available.
* The buffer should be written into, then the index should be passed to {@link #encodeInputBuffer(InputBuffer)}.
*/
@SuppressWarnings({"StatementWithEmptyBody", "WeakerAccess"})
protected void acquireInputBuffer(@NonNull InputBuffer holder) {
while (!tryAcquireInputBuffer(holder)) {}
}
/**
* Encode data into the {@link #mMediaCodec}.
*/
@SuppressWarnings("WeakerAccess")
protected void encode(@Nullable final ByteBuffer buffer, final int length, final long presentationTimeUs) {
final ByteBuffer[] inputBuffers = mMediaCodec.getInputBuffers();
while (true) {
final int inputBufferIndex = mMediaCodec.dequeueInputBuffer(TIMEOUT_USEC);
if (inputBufferIndex >= 0) {
final ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
if (buffer != null) {
inputBuffer.put(buffer);
}
if (length <= 0) { // send EOS
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, 0,
presentationTimeUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
protected void encodeInputBuffer(InputBuffer buffer) {
LOG.w("ENCODING:", getName(), "Buffer:", buffer.index, "Bytes:", buffer.length, "Presentation:", buffer.timestamp);
if (buffer.isEndOfStream) { // send EOS
mMediaCodec.queueInputBuffer(buffer.index, 0, 0,
buffer.timestamp, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
} else {
mMediaCodec.queueInputBuffer(inputBufferIndex, 0, length,
presentationTimeUs, 0);
}
break;
} else if (inputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
// wait for MediaCodec encoder is ready to encode
// nothing to do here because MediaCodec#dequeueInputBuffer(TIMEOUT_USEC)
// will wait for maximum TIMEOUT_USEC(10msec) on each call
mMediaCodec.queueInputBuffer(buffer.index, 0, buffer.length,
buffer.timestamp, 0);
}
}
/**
* Signals the end of input stream. This is a Video only API, as in the normal case,
* we use input buffers to signal the end. In the video case, we don't have input buffers
* because we use an input surface instead.
*/
@SuppressWarnings("WeakerAccess")
protected void signalEndOfInputStream() {
mMediaCodec.signalEndOfInputStream();
}
/**
* Extracts all pending data that was written and encoded into {@link #mMediaCodec},
* and forwards it to the muxer.
* <p>
* If endOfStream is not set, this returns when there is no more data to drain. If it
* is set, we send EOS to the encoder, and then iterate until we see EOS on the output.
* Calling this with endOfStream set should be done once, right before stopping the muxer.
*
* If drainAll is not set, this returns after OUTPUT_TIMEOUT_US if there is no more data to drain.
* If drainAll is set, we wait until we see EOS on the output.
* Calling this with drainAll set should be done once, right before stopping the muxer.
*/
@SuppressLint("LogNotTimber")
@SuppressWarnings("WeakerAccess")
protected void drain(boolean endOfStream) {
if (endOfStream) {
mMediaCodec.signalEndOfInputStream();
protected void drainOutput(boolean drainAll) {
LOG.w("DRAINING:", getName(), "EOS:", drainAll);
if (mMediaCodec == null) {
LOG.e("drain() was called before prepare() or after releasing.");
return;
}
if (mBuffers == null) {
mBuffers = new MediaCodecBuffers(mMediaCodec);
}
ByteBuffer[] encoderOutputBuffers = mMediaCodec.getOutputBuffers();
while (true) {
int encoderStatus = mMediaCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
int encoderStatus = mMediaCodec.dequeueOutputBuffer(mBufferInfo, OUTPUT_TIMEOUT_US);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
// no output available yet
if (!endOfStream) break; // out of while
if (!drainAll) break; // out of while
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
// not expected for an encoder
encoderOutputBuffers = mMediaCodec.getOutputBuffers();
mBuffers.onOutputBuffersChanged();
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// should happen before receiving buffers, and should only happen once
if (mController.isStarted()) throw new RuntimeException("format changed twice");
if (mController.isStarted()) throw new RuntimeException("MediaFormat changed twice.");
MediaFormat newFormat = mMediaCodec.getOutputFormat();
// now that we have the Magic Goodies, start the muxer
mTrackIndex = mController.start(newFormat);
mTrackIndex = mController.requestStart(newFormat);
mOutputBufferPool = new OutputBufferPool(mTrackIndex);
} else if (encoderStatus < 0) {
Log.w("VideoMediaEncoder", "unexpected result from encoder.dequeueOutputBuffer: " + encoderStatus);
LOG.e("Unexpected result from dequeueOutputBuffer: " + encoderStatus);
// let's ignore it
} else {
ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
if (encodedData == null) {
throw new RuntimeException("encoderOutputBuffer " + encoderStatus + " was null");
}
ByteBuffer encodedData = mBuffers.getOutputBuffer(encoderStatus);
// Codec config means that config data was pulled out and fed to the muxer when we got
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
@@ -155,41 +277,56 @@ abstract class MediaEncoder {
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(mBufferInfo.offset);
encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
mController.write(mTrackIndex, encodedData, mBufferInfo);
mLastPresentationTime = mBufferInfo.presentationTimeUs;
if (mStartPresentationTime == 0) {
mStartPresentationTime = mLastPresentationTime;
// Store startPresentationTime and lastPresentationTime, useful for example to
// detect the mMaxLengthReached and stop recording.
if (mStartPresentationTimeUs == Long.MIN_VALUE) {
mStartPresentationTimeUs = mBufferInfo.presentationTimeUs;
}
mLastPresentationTimeUs = mBufferInfo.presentationTimeUs;
// Pass presentation times as offsets with respect to the mStartPresentationTimeUs.
// This ensures consistency between audio pts (coming from System.nanoTime()) and
// video pts (coming from SurfaceTexture) both of which have no meaningful time-base
// and should be used for offsets only.
// TODO find a better way, this causes sync issues. (+ note: this sends pts=0 at first)
// mBufferInfo.presentationTimeUs = mLastPresentationTimeUs - mStartPresentationTimeUs;
LOG.i("DRAINING:", getName(), "Dispatching write(). Presentation:", mBufferInfo.presentationTimeUs);
// TODO fix the mBufferInfo being the same, then implement delayed writing in Controller
// and remove the isStarted() check here.
OutputBuffer buffer = mOutputBufferPool.get();
buffer.info = mBufferInfo;
buffer.trackIndex = mTrackIndex;
buffer.data = encodedData;
mController.write(mOutputBufferPool, buffer);
}
mMediaCodec.releaseOutputBuffer(encoderStatus, false);
if (!mMaxLengthReached) {
if (mLastPresentationTime / 1000 - mStartPresentationTime / 1000 > mMaxLengthMillis) {
// Check for the maxLength constraint (with appropriate conditions)
// Not needed if drainAll because we already were asked to stop
if (!drainAll
&& !mMaxLengthReached
&& mStartPresentationTimeUs != Long.MIN_VALUE
&& mLastPresentationTimeUs - mStartPresentationTimeUs > mMaxLengthMillis * 1000) {
LOG.w("DRAINING: Reached maxLength! mLastPresentationTimeUs:", mLastPresentationTimeUs,
"mStartPresentationTimeUs:", mStartPresentationTimeUs,
"mMaxLengthUs:", mMaxLengthMillis * 1000);
mMaxLengthReached = true;
// Log.e("MediaEncoder", this.getClass().getSimpleName() + " requested stop at " + (mLastPresentationTime * 1000 * 1000));
mController.requestStop();
mController.requestStop(mTrackIndex);
break;
}
}
// Check for the EOS flag so we can release the encoder.
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
break; // out of while
LOG.w("DRAINING:", getName(), "Dispatching release().");
release();
break;
}
}
}
}
private long mStartPresentationTime = 0;
private long mLastPresentationTime = 0;
long getPresentationTime() {
long result = System.nanoTime() / 1000L;
// presentationTimeUs should be monotonic
// otherwise muxer fail to write
if (result < mLastPresentationTime) {
result = (mLastPresentationTime - result) + result;
}
return result;
}
private long mStartPresentationTimeUs = Long.MIN_VALUE;
private long mLastPresentationTimeUs = 0;
abstract int getBitRate();
abstract int getEncodedBitRate();
}

@@ -1,6 +1,5 @@
package com.otaliastudios.cameraview;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.os.Build;
@@ -10,13 +9,12 @@ import androidx.annotation.RequiresApi;
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR2)
class MediaEncoderEngine {
private final static String TAG = MediaEncoder.class.getSimpleName();
private final static String TAG = MediaEncoderEngine.class.getSimpleName();
private final static CameraLogger LOG = CameraLogger.create(TAG);
@SuppressWarnings("WeakerAccess")
@@ -24,20 +22,19 @@ class MediaEncoderEngine {
final static int STOP_BY_MAX_DURATION = 1;
final static int STOP_BY_MAX_SIZE = 2;
private WorkerHandler mWorker;
private ArrayList<MediaEncoder> mEncoders;
private MediaMuxer mMediaMuxer;
private int mMediaMuxerStartCount;
private int mStartedEncodersCount;
private int mStoppedEncodersCount;
private boolean mMediaMuxerStarted;
private Controller mController;
private Listener mListener;
private int mStopReason = STOP_BY_USER;
private int mPossibleStopReason;
private final Object mLock = new Object();
private final Object mControllerLock = new Object();
MediaEncoderEngine(@NonNull File file, @NonNull VideoMediaEncoder videoEncoder, @Nullable AudioMediaEncoder audioEncoder,
final int maxDuration, final long maxSize, @Nullable Listener listener) {
mWorker = WorkerHandler.get("EncoderEngine");
mListener = listener;
mController = new Controller();
mEncoders = new ArrayList<>();
@@ -50,17 +47,16 @@ class MediaEncoderEngine {
} catch (IOException e) {
throw new RuntimeException(e);
}
mMediaMuxerStartCount = 0;
mStartedEncodersCount = 0;
mMediaMuxerStarted = false;
mWorker.post(new Runnable() {
@Override
public void run() {
mStoppedEncodersCount = 0;
// Trying to convert the size constraints to duration constraints,
// because they are super easy to check.
// This is really naive & probably not accurate, but...
int bitRate = 0;
for (MediaEncoder encoder : mEncoders) {
bitRate += encoder.getBitRate();
bitRate += encoder.getEncodedBitRate();
}
int bytePerSecond = bitRate / 8;
long sizeMaxDuration = (maxSize / bytePerSecond) * 1000L;
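A worked example of this naive conversion, with illustrative bitrates; note that the integer division truncates the result to whole seconds:

```java
// Worked example of the size -> duration conversion above (values are illustrative).
public class SizeToDuration {
    public static void main(String[] args) {
        int bitRate = 30_000_000 + 705_600;  // video + audio bit rates, bits/sec
        int bytePerSecond = bitRate / 8;     // 3_838_200 bytes/sec
        long maxSize = 16_000_000L;          // byte constraint
        long sizeMaxDuration = (maxSize / bytePerSecond) * 1000L;
        // 16_000_000 / 3_838_200 = 4 (truncated), so sizeMaxDuration = 4000 ms.
        System.out.println(sizeMaxDuration + " ms");
    }
}
```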
@@ -76,25 +72,29 @@ class MediaEncoderEngine {
mPossibleStopReason = STOP_BY_MAX_DURATION;
finalMaxDuration = maxDuration;
}
LOG.i("Computed a max duration of", (finalMaxDuration / 1000F));
LOG.w("Computed a max duration of", (finalMaxDuration / 1000F));
for (MediaEncoder encoder : mEncoders) {
encoder.prepare(mController, finalMaxDuration);
}
}
});
}
// Stuff here might be called from multiple threads.
class Controller {
int start(MediaFormat format) {
synchronized (mLock) {
/**
* Request that the muxer should start. This is not guaranteed to be executed:
* we wait for all encoders to call this method, and only then, start the muxer.
* @param format the media format
* @return the encoder track index
*/
int requestStart(MediaFormat format) {
synchronized (mControllerLock) {
if (mMediaMuxerStarted) {
throw new IllegalStateException("Trying to start but muxer started already");
}
int track = mMediaMuxer.addTrack(format);
mMediaMuxerStartCount++;
if (mMediaMuxerStartCount == mEncoders.size()) {
LOG.w("Controller:", "Assigned track", track, "to format", format.getString(MediaFormat.KEY_MIME));
if (++mStartedEncodersCount == mEncoders.size()) {
mMediaMuxer.start();
mMediaMuxerStarted = true;
}
@@ -102,63 +102,89 @@
}
}
/**
* Whether the muxer is started.
* @return true if muxer was started
*/
boolean isStarted() {
synchronized (mLock) {
synchronized (mControllerLock) {
return mMediaMuxerStarted;
}
}
// Synchronization does not seem needed here.
void write(int track, ByteBuffer encodedData, MediaCodec.BufferInfo info) {
/**
* Writes the given data to the muxer. Should be called after {@link #isStarted()}
* returns true. Note: this seems to be thread safe, no lock.
* TODO cache values if not started yet, then apply later. Read comments in drain().
* Currently they are recycled instantly.
*/
void write(OutputBufferPool pool, OutputBuffer buffer) {
if (!mMediaMuxerStarted) {
throw new IllegalStateException("Trying to write before muxer started");
}
mMediaMuxer.writeSampleData(track, encodedData, info);
}
void requestStop() {
synchronized (mLock) {
mMediaMuxerStartCount--;
if (mMediaMuxerStartCount == 0) {
// This is a bad idea and causes crashes.
// if (info.presentationTimeUs < mLastTimestampUs) info.presentationTimeUs = mLastTimestampUs;
// mLastTimestampUs = info.presentationTimeUs;
LOG.v("Writing for track", buffer.trackIndex, ". Presentation:", buffer.info.presentationTimeUs);
mMediaMuxer.writeSampleData(buffer.trackIndex, buffer.data, buffer.info);
pool.recycle(buffer);
}
/**
* Requests that the engine stops. This is not executed until all encoders call
* this method, so it is a kind of soft request, just like {@link #requestStart(MediaFormat)}.
* To be used when maxLength / maxSize constraints are reached, for example.
*
* When this succeeds, {@link MediaEncoder#stop()} is called.
*/
void requestStop(int track) {
LOG.i("RequestStop was called for track", track);
synchronized (mControllerLock) {
if (--mStartedEncodersCount == 0) {
mStopReason = mPossibleStopReason;
stop();
}
}
}
/**
* Notifies that the encoder was stopped. After this is called by all encoders,
* we will actually stop the muxer.
*/
void requestRelease(int track) {
LOG.i("requestRelease was called for track", track);
synchronized (mControllerLock) {
if (++mStoppedEncodersCount == mEncoders.size()) {
release();
}
}
}
}
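The Controller behaves like a pair of counting barriers: the muxer starts only once every encoder has called requestStart, and is released only once every encoder has called requestRelease. A minimal sketch of the start-side barrier (illustrative class, not part of the library):

```java
// Counting-barrier sketch: the action fires only when all parties have arrived.
class StartBarrier {
    private final int parties; // number of encoders
    private int arrived;
    private boolean started;

    StartBarrier(int parties) { this.parties = parties; }

    /** Returns true for the single call that actually triggers the start. */
    synchronized boolean arrive() {
        if (started) throw new IllegalStateException("already started");
        if (++arrived == parties) {
            started = true; // this is where e.g. mMediaMuxer.start() would run
            return true;
        }
        return false;
    }
}
```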
void start() {
mWorker.post(new Runnable() {
@Override
public void run() {
final void start() {
for (MediaEncoder encoder : mEncoders) {
encoder.start();
}
}
});
}
void notify(final String event, final Object data) {
mWorker.post(new Runnable() {
@Override
public void run() {
@SuppressWarnings("SameParameterValue")
final void notify(final String event, final Object data) {
for (MediaEncoder encoder : mEncoders) {
encoder.notify(event, data);
}
}
});
}
void stop() {
mWorker.post(new Runnable() {
@Override
public void run() {
/**
* This just asks the encoders to stop. We will wait for them to call {@link Controller#requestRelease(int)}
* before actually stopping the muxer, as there might be async stuff going on.
*/
final void stop() {
for (MediaEncoder encoder : mEncoders) {
encoder.stop();
}
for (MediaEncoder encoder : mEncoders) {
encoder.release();
}
private void release() {
Exception error = null;
if (mMediaMuxer != null) {
// stop() throws an exception if you haven't fed it any data.
@@ -172,13 +198,28 @@
}
mMediaMuxer = null;
}
if (mListener != null) mListener.onEncoderStop(mStopReason, error);
mStopReason = STOP_BY_USER;
if (mListener != null) {
mListener.onEncoderStop(mStopReason, error);
mListener = null;
mMediaMuxerStartCount = 0;
}
mStopReason = STOP_BY_USER;
mStartedEncodersCount = 0;
mStoppedEncodersCount = 0;
mMediaMuxerStarted = false;
}
});
@NonNull
VideoMediaEncoder getVideoEncoder() {
return (VideoMediaEncoder) mEncoders.get(0);
}
@Nullable
AudioMediaEncoder getAudioEncoder() {
if (mEncoders.size() > 1) {
return (AudioMediaEncoder) mEncoders.get(1);
} else {
return null;
}
}
interface Listener {

@@ -0,0 +1,11 @@
package com.otaliastudios.cameraview;
import android.media.MediaCodec;
import java.nio.ByteBuffer;
class OutputBuffer {
MediaCodec.BufferInfo info;
int trackIndex;
ByteBuffer data;
}

@@ -0,0 +1,18 @@
package com.otaliastudios.cameraview;
import android.media.MediaCodec;
class OutputBufferPool extends Pool<OutputBuffer> {
OutputBufferPool(final int trackIndex) {
super(Integer.MAX_VALUE, new Factory<OutputBuffer>() {
@Override
public OutputBuffer create() {
OutputBuffer buffer = new OutputBuffer();
buffer.trackIndex = trackIndex;
buffer.info = new MediaCodec.BufferInfo();
return buffer;
}
});
}
}

@@ -0,0 +1,89 @@
package com.otaliastudios.cameraview;
import java.util.concurrent.LinkedBlockingQueue;
import androidx.annotation.CallSuper;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
class Pool<T> {
private static final String TAG = Pool.class.getSimpleName();
private static final CameraLogger LOG = CameraLogger.create(TAG);
private int maxPoolSize;
private int activeCount;
private LinkedBlockingQueue<T> mQueue;
private Factory<T> factory;
interface Factory<T> {
T create();
}
Pool(int maxPoolSize, Factory<T> factory) {
this.maxPoolSize = maxPoolSize;
this.mQueue = new LinkedBlockingQueue<>(maxPoolSize);
this.factory = factory;
}
boolean canGet() {
return count() < maxPoolSize;
}
@Nullable
T get() {
T buffer = mQueue.poll();
if (buffer != null) {
activeCount++; // poll() removed it from the cached queue, so count it as active again
LOG.v("GET: Reusing recycled item.", this);
return buffer;
}
if (!canGet()) {
LOG.v("GET: Returning null. Too much items requested.", this);
return null;
}
activeCount++;
LOG.v("GET: Creating a new item.", this);
return factory.create();
}
void recycle(@NonNull T item) {
LOG.v("RECYCLE: Recycling item.", this);
if (--activeCount < 0) {
throw new IllegalStateException("Trying to recycle an item which makes activeCount < 0." +
"This means that this or some previous items being recycled were not coming from " +
"this pool, or some item was recycled more than once. " + this);
}
if (!mQueue.offer(item)) {
throw new IllegalStateException("Trying to recycle an item while the queue is full. " +
"This means that this or some previous items being recycled were not coming from " +
"this pool, or some item was recycled more than once. " + this);
}
}
@NonNull
@Override
public String toString() {
return getClass().getSimpleName() + " -- count:" + count() + ", active:" + activeCount() + ", cached:" + cachedCount();
}
final int count() {
return activeCount() + cachedCount();
}
final int activeCount() {
return activeCount;
}
final int cachedCount() {
return mQueue.size();
}
@CallSuper
void clear() {
mQueue.clear();
}
}
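Typical usage pairs every successful get() with exactly one recycle(). A short sketch, assuming same-package access to Pool (the items here are illustrative):

```java
// Sketch: Pool lifecycle. Every get() is eventually matched by one recycle().
public class PoolUsage {
    public static void main(String[] args) {
        Pool<StringBuilder> pool = new Pool<>(2, StringBuilder::new);
        StringBuilder a = pool.get();  // created by the factory, activeCount = 1
        StringBuilder b = pool.get();  // created by the factory, activeCount = 2
        StringBuilder c = pool.get();  // null: count() reached maxPoolSize (2)
        pool.recycle(a);               // back into the cached queue
        StringBuilder a2 = pool.get(); // same instance as 'a', reused from the queue
    }
}
```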

@@ -3,6 +3,8 @@ package com.otaliastudios.cameraview;
import android.opengl.EGLContext;
import android.opengl.Matrix;
import android.os.Build;
import android.widget.TextView;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.annotation.RequiresApi;
@@ -15,11 +17,6 @@ class TextureMediaEncoder extends VideoMediaEncoder<TextureMediaEncoder.Config>
final static String FRAME_EVENT = "frame";
static class Frame {
float[] transform;
float[] overlayTransform;
long timestamp;
}
static class Config extends VideoMediaEncoder.Config {
int textureId;
int overlayTextureId;
@@ -52,15 +49,41 @@ class TextureMediaEncoder extends VideoMediaEncoder<TextureMediaEncoder.Config>
private EglCore mEglCore;
private EglWindowSurface mWindow;
private EglViewport mViewport;
private Pool<TextureFrame> mFramePool = new Pool<>(100, new Pool.Factory<TextureFrame>() {
@Override
public TextureFrame create() {
return new TextureFrame();
}
});
TextureMediaEncoder(@NonNull Config config) {
super(config);
}
static class TextureFrame {
private TextureFrame() {}
// Nanoseconds, in no meaningful time-base. Should be for offsets only.
// Typically coming from SurfaceTexture.getTimestamp().
long timestamp;
float[] transform = new float[16];
float[] overlayTransform = new float[16];
}
@NonNull
TextureFrame acquireFrame() {
if (!mFramePool.canGet()) {
throw new RuntimeException("Need more frames than this! Please increase the pool size.");
} else {
//noinspection ConstantConditions
return mFramePool.get();
}
}
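For context, a hedged sketch of the producer side of this pool, mirroring the SnapshotVideoRecorder changes below; `engine` stands for the MediaEncoderEngine and is an assumption here:

```java
// Acquire a pooled frame, fill it, and post it to the encoder thread.
TextureMediaEncoder.TextureFrame frame = textureEncoder.acquireFrame();
frame.timestamp = surfaceTexture.getTimestamp();
surfaceTexture.getTransformMatrix(frame.transform);
engine.notify(TextureMediaEncoder.FRAME_EVENT, frame);
// onEvent() above recycles the frame into mFramePool once drawn or dropped.
```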
@EncoderThread
@Override
void prepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
super.prepare(controller, maxLengthMillis);
void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
super.onPrepare(controller, maxLengthMillis);
mEglCore = new EglCore(mConfig.eglContext, EglCore.FLAG_RECORDABLE);
mWindow = new EglWindowSurface(mEglCore, mSurface, true);
mWindow.makeCurrent(); // drawing will happen on the InputWindowSurface, which
@ -70,51 +93,29 @@ class TextureMediaEncoder extends VideoMediaEncoder<TextureMediaEncoder.Config>
@EncoderThread
@Override
void release() {
super.release();
if (mWindow != null) {
mWindow.release();
mWindow = null;
}
if (mViewport != null) {
mViewport.release(true);
mViewport = null;
}
if (mEglCore != null) {
mEglCore.release();
mEglCore = null;
}
void onStart() {
super.onStart();
// Nothing to do here. Waiting for the first frame.
}
@EncoderThread
@Override
void start() {
super.start();
// Nothing to do here. Waiting for the first frame.
void onEvent(@NonNull String event, @Nullable Object data) {
if (!event.equals(FRAME_EVENT)) return;
TextureFrame frame = (TextureFrame) data;
if (frame == null) return; // Should not happen
if (frame.timestamp == 0 || mFrameNum < 0) {
// The first condition comes from grafika.
// The second condition means we were asked to stop.
mFramePool.recycle(frame);
return;
}
@EncoderThread
@Override
void notify(@NonNull String event, @Nullable Object data) {
if (event.equals(FRAME_EVENT)) {
Frame frame = (Frame) data;
// Seeing this after device is toggled off/on with power button. The
// first frame back has a zero timestamp.
// MPEG4Writer thinks this is cause to abort() in native code, so it's very
// important that we just ignore the frame.
if (frame.timestamp == 0) return;
if (mFrameNum < 0) return;
mFrameNum++;
int arg1 = (int) (frame.timestamp >> 32);
int arg2 = (int) frame.timestamp;
long timestamp = (((long) arg1) << 32) | (((long) arg2) & 0xffffffffL);
float[] transform = frame.transform;
LOG.v("Incoming frame timestamp:", frame.timestamp);
// We must scale this matrix like GlCameraPreview does, because it might have some cropping.
// Scaling takes place with respect to the (0, 0, 0) point, so we must apply a Translation to compensate.
float[] transform = frame.transform;
float scaleX = mConfig.scaleX;
float scaleY = mConfig.scaleY;
float scaleTranslX = (1F - scaleX) / 2F;
@ -131,12 +132,29 @@ class TextureMediaEncoder extends VideoMediaEncoder<TextureMediaEncoder.Config>
Matrix.rotateM(transform, 0, mConfig.transformRotation, 0, 0, 1);
Matrix.translateM(transform, 0, -0.5F, -0.5F, 0);
drain(false);
drainOutput(false);
// Future note: passing scale values to the viewport? They are scaleX and scaleY,
// but flipped based on the mConfig.scaleFlipped boolean.
mViewport.drawFrame(mConfig.textureId, mConfig.overlayTextureId, transform, frame.overlayTransform);
mWindow.setPresentationTime(timestamp);
mWindow.setPresentationTime(frame.timestamp);
mWindow.swapBuffers();
mFramePool.recycle(frame);
}
@Override
void onRelease() {
mFramePool.clear();
if (mWindow != null) {
mWindow.release();
mWindow = null;
}
if (mViewport != null) {
mViewport.release(true);
mViewport = null;
}
if (mEglCore != null) {
mEglCore.release();
mEglCore = null;
}
}
}

@ -51,10 +51,15 @@ abstract class VideoMediaEncoder<C extends VideoMediaEncoder.Config> extends Med
mConfig = config;
}
@NonNull
@Override
String getName() {
return "VideoEncoder";
}
@EncoderThread
@Override
void prepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
super.prepare(controller, maxLengthMillis);
void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
MediaFormat format = MediaFormat.createVideoFormat(mConfig.mimeType, mConfig.width, mConfig.height);
// Set some properties. Failing to specify some of these can cause the MediaCodec
@ -62,6 +67,7 @@ abstract class VideoMediaEncoder<C extends VideoMediaEncoder.Config> extends Med
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, mConfig.bitRate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, mConfig.frameRate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 6); // TODO: temporary debug override of the configured frame rate.
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);
format.setInteger("rotation-degrees", mConfig.rotation);
@ -79,20 +85,21 @@ abstract class VideoMediaEncoder<C extends VideoMediaEncoder.Config> extends Med
@EncoderThread
@Override
void start() {
void onStart() {
// Nothing to do here. Waiting for the first frame.
mFrameNum = 0;
}
@EncoderThread
@Override
void stop() {
void onStop() {
mFrameNum = -1;
drain(true);
signalEndOfInputStream();
drainOutput(true);
}
@Override
int getBitRate() {
int getEncodedBitRate() {
return mConfig.bitRate;
}
}

@ -66,10 +66,13 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
});
}
// Preview surface is now available. If camera is open, set up.
/**
* Preview surface is now available. If camera is open, set up.
* At this point we are sure that mPreview is not null.
*/
@Override
public void onSurfaceAvailable() {
LOG.i("onSurfaceAvailable:", "Size is", mPreview.getOutputSurfaceSize());
LOG.i("onSurfaceAvailable:", "Size is", getPreviewSurfaceSize(REF_VIEW));
schedule(null, false, new Runnable() {
@Override
public void run() {
@ -80,23 +83,26 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
});
}
// Preview surface did change its size. Compute a new preview size.
// This requires stopping and restarting the preview.
/**
* Preview surface did change its size. Compute a new preview size.
* This requires stopping and restarting the preview.
* At this point we are sure that mPreview is not null.
*/
@Override
public void onSurfaceChanged() {
LOG.i("onSurfaceChanged, size is", mPreview.getOutputSurfaceSize());
LOG.i("onSurfaceChanged, size is", getPreviewSurfaceSize(REF_VIEW));
schedule(null, true, new Runnable() {
@Override
public void run() {
if (!mIsBound) return;
// Compute a new camera preview size.
Size newSize = computePreviewSize(sizesFromList(mCamera.getParameters().getSupportedPreviewSizes()));
if (newSize.equals(mPreviewSize)) return;
Size newSize = computePreviewStreamSize(sizesFromList(mCamera.getParameters().getSupportedPreviewSizes()));
if (newSize.equals(mPreviewStreamSize)) return;
// Apply.
LOG.i("onSurfaceChanged:", "Computed a new preview size. Going on.");
mPreviewSize = newSize;
mPreviewStreamSize = newSize;
stopPreview();
startPreview("onSurfaceChanged:");
}
@ -119,17 +125,22 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
return isCameraAvailable() && mPreview != null && mPreview.hasSurface() && !mIsBound;
}
// The act of binding an "open" camera to a "ready" preview.
// These can happen at different times but we want to end up here.
/**
* The act of binding an "open" camera to a "ready" preview.
* These can happen at different times but we want to end up here.
* At this point we are sure that mPreview is not null.
*/
@WorkerThread
private void bindToSurface() {
LOG.i("bindToSurface:", "Started");
Object output = mPreview.getOutput();
try {
if (mPreview.getOutputClass() == SurfaceHolder.class) {
if (output instanceof SurfaceHolder) {
mCamera.setPreviewDisplay((SurfaceHolder) output);
} else {
} else if (output instanceof SurfaceTexture) {
mCamera.setPreviewTexture((SurfaceTexture) output);
} else {
throw new RuntimeException("Unknown CameraPreview output class.");
}
} catch (IOException e) {
LOG.e("bindToSurface:", "Failed to bind.", e);
@ -137,20 +148,22 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
}
mCaptureSize = computeCaptureSize();
mPreviewSize = computePreviewSize(sizesFromList(mCamera.getParameters().getSupportedPreviewSizes()));
mPreviewStreamSize = computePreviewStreamSize(sizesFromList(mCamera.getParameters().getSupportedPreviewSizes()));
mIsBound = true;
}
@WorkerThread
private void unbindFromSurface() {
mIsBound = false;
mPreviewSize = null;
mPreviewStreamSize = null;
mCaptureSize = null;
try {
if (mPreview.getOutputClass() == SurfaceHolder.class) {
mCamera.setPreviewDisplay(null);
} else {
} else if (mPreview.getOutputClass() == SurfaceTexture.class) {
mCamera.setPreviewTexture(null);
} else {
throw new RuntimeException("Unknown CameraPreview output class.");
}
} catch (IOException e) {
LOG.e("unbindFromSurface", "Could not release surface", e);
@ -163,22 +176,31 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
// To be called when the preview size is setup or changed.
private void startPreview(String log) {
LOG.i(log, "Dispatching onCameraPreviewSizeChanged.");
mCameraCallbacks.onCameraPreviewSizeChanged();
LOG.i(log, "Dispatching onCameraPreviewStreamSizeChanged.");
mCameraCallbacks.onCameraPreviewStreamSizeChanged();
Size previewSize = getPreviewSize(REF_VIEW);
Size previewSize = getPreviewStreamSize(REF_VIEW);
boolean wasFlipped = flip(REF_SENSOR, REF_VIEW);
mPreview.setInputStreamSize(previewSize.getWidth(), previewSize.getHeight(), wasFlipped);
mPreview.setStreamSize(previewSize.getWidth(), previewSize.getHeight(), wasFlipped);
Camera.Parameters params = mCamera.getParameters();
mPreviewFormat = params.getPreviewFormat();
params.setPreviewSize(mPreviewSize.getWidth(), mPreviewSize.getHeight()); // <- not allowed during preview
params.setPreviewSize(mPreviewStreamSize.getWidth(), mPreviewStreamSize.getHeight()); // <- not allowed during preview
if (mMode == Mode.PICTURE) {
params.setPictureSize(mCaptureSize.getWidth(), mCaptureSize.getHeight()); // <- allowed
} else {
// mCaptureSize in this case is a video size. The available video sizes are not necessarily
// a subset of the picture sizes, so we can't use the mCaptureSize value: it might crash.
// However, the setPictureSize() passed here is useless: we don't allow HQ pictures in video mode.
// While this might be lifted in the future, for now, just use a picture capture size.
Size pictureSize = computeCaptureSize(Mode.PICTURE);
params.setPictureSize(pictureSize.getWidth(), pictureSize.getHeight());
}
mCamera.setParameters(params);
mCamera.setPreviewCallbackWithBuffer(null); // Release anything left
mCamera.setPreviewCallbackWithBuffer(this); // Add ourselves
mFrameManager.allocate(ImageFormat.getBitsPerPixel(mPreviewFormat), mPreviewSize);
mFrameManager.allocate(ImageFormat.getBitsPerPixel(mPreviewFormat), mPreviewStreamSize);
LOG.i(log, "Starting preview with startPreview().");
try {
@ -266,12 +288,12 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
}
if (mCamera != null) {
stopPreview();
unbindFromSurface();
if (mIsBound) unbindFromSurface();
destroyCamera();
}
mCameraOptions = null;
mCamera = null;
mPreviewSize = null;
mPreviewStreamSize = null;
mCaptureSize = null;
mIsBound = false;
LOG.w("onStop:", "Clean up.", "Returning.");
@ -431,8 +453,12 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(mCameraId, info);
if (info.canDisableShutterSound) {
mCamera.enableShutterSound(mPlaySounds);
return true;
try {
// This method is documented to throw on some occasions. See #377.
return mCamera.enableShutterSound(mPlaySounds);
} catch (RuntimeException exception) {
return false;
}
}
}
if (mPlaySounds) {
@ -554,13 +580,17 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
result.location = mLocation;
result.rotation = offset(REF_SENSOR, REF_OUTPUT);
result.size = getPictureSize(REF_OUTPUT);
result.facing = mFacing;
mPictureRecorder = new FullPictureRecorder(result, Camera1.this, mCamera);
mPictureRecorder.take();
}
});
}
/**
* Just a note about the snapshot size - it is the PreviewStreamSize, cropped with the view ratio.
* @param viewAspectRatio the view aspect ratio
*/
@Override
void takePictureSnapshot(@NonNull final AspectRatio viewAspectRatio) {
LOG.v("takePictureSnapshot: scheduling");
@ -573,7 +603,8 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
PictureResult result = new PictureResult();
result.location = mLocation;
result.isSnapshot = true;
result.size = getPreviewSize(REF_OUTPUT); // Not the real size: it will be cropped to match the view ratio
result.facing = mFacing;
result.size = getUncroppedSnapshotSize(REF_OUTPUT); // Not the real size: it will be cropped to match the view ratio
result.rotation = offset(REF_SENSOR, REF_OUTPUT); // Actually it will be rotated and set to 0.
AspectRatio outputRatio = flip(REF_OUTPUT, REF_VIEW) ? viewAspectRatio.inverse() : viewAspectRatio;
// LOG.e("ROTBUG_pic", "aspectRatio (REF_VIEW):", viewAspectRatio);
@ -596,7 +627,7 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
Frame frame = mFrameManager.getFrame(data,
System.currentTimeMillis(),
offset(REF_SENSOR, REF_OUTPUT),
mPreviewSize,
mPreviewStreamSize,
mPreviewFormat);
mCameraCallbacks.dispatchFrame(frame);
}
@ -653,6 +684,7 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
videoResult.isSnapshot = false;
videoResult.codec = mVideoCodec;
videoResult.location = mLocation;
videoResult.facing = mFacing;
videoResult.rotation = offset(REF_SENSOR, REF_OUTPUT);
videoResult.size = flip(REF_SENSOR, REF_OUTPUT) ? mCaptureSize.flip() : mCaptureSize;
videoResult.audio = mAudio;
@ -677,6 +709,10 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
});
}
/**
* @param file the output file
* @param viewAspectRatio the view aspect ratio
*/
@SuppressLint("NewApi")
@Override
void takeVideoSnapshot(@NonNull final File file, @NonNull final AspectRatio viewAspectRatio) {
@ -697,6 +733,7 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
videoResult.isSnapshot = true;
videoResult.codec = mVideoCodec;
videoResult.location = mLocation;
videoResult.facing = mFacing;
videoResult.videoBitRate = mVideoBitRate;
videoResult.audioBitRate = mAudioBitRate;
videoResult.audio = mAudio;
@ -738,7 +775,7 @@ class Camera1 extends CameraController implements Camera.PreviewCallback, Camera
// Based on this we will use VO for everything. See if we get issues about distortion
// and maybe we can improve. The reason why this happens is beyond my understanding.
Size outputSize = getPreviewSize(REF_OUTPUT);
Size outputSize = getUncroppedSnapshotSize(REF_OUTPUT);
AspectRatio outputRatio = flip(REF_OUTPUT, REF_VIEW) ? viewAspectRatio.inverse() : viewAspectRatio;
Rect outputCrop = CropHelper.computeCrop(outputSize, outputRatio);
outputSize = new Size(outputCrop.width(), outputCrop.height());

@ -8,6 +8,7 @@ import android.os.Handler;
import android.os.Looper;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.annotation.VisibleForTesting;
import androidx.annotation.WorkerThread;
import java.io.File;
@ -50,10 +51,15 @@ abstract class CameraController implements
protected boolean mPlaySounds;
protected DisableOverlayFor mDisableOverlayFor;
@Nullable private SizeSelector mPreviewSizeSelector;
@Nullable private SizeSelector mPreviewStreamSizeSelector;
private SizeSelector mPictureSizeSelector;
private SizeSelector mVideoSizeSelector;
@VisibleForTesting(otherwise = VisibleForTesting.PRIVATE)
int mSnapshotMaxWidth = Integer.MAX_VALUE; // in REF_VIEW for consistency with SizeSelectors
@VisibleForTesting(otherwise = VisibleForTesting.PRIVATE)
int mSnapshotMaxHeight = Integer.MAX_VALUE; // in REF_VIEW for consistency with SizeSelectors
protected int mCameraId;
protected CameraOptions mCameraOptions;
protected Mapper mMapper;
@ -65,7 +71,7 @@ abstract class CameraController implements
protected int mVideoBitRate;
protected int mAudioBitRate;
protected Size mCaptureSize;
protected Size mPreviewSize;
protected Size mPreviewStreamSize;
protected int mPreviewFormat;
protected int mSensorOffset;
@ -92,7 +98,7 @@ abstract class CameraController implements
mFrameManager = new FrameManager(2, this);
}
void setPreview(CameraPreview cameraPreview) {
void setPreview(@NonNull CameraPreview cameraPreview) {
mPreview = cameraPreview;
mPreview.setSurfaceCallback(this);
}
@ -280,8 +286,8 @@ abstract class CameraController implements
mDeviceOrientation = deviceOrientation;
}
final void setPreviewSizeSelector(@Nullable SizeSelector selector) {
mPreviewSizeSelector = selector;
final void setPreviewStreamSizeSelector(@Nullable SizeSelector selector) {
mPreviewStreamSizeSelector = selector;
}
final void setPictureSizeSelector(@NonNull SizeSelector selector) {
@ -312,6 +318,14 @@ abstract class CameraController implements
mAudioBitRate = audioBitRate;
}
final void setSnapshotMaxWidth(int maxWidth) {
mSnapshotMaxWidth = maxWidth;
}
final void setSnapshotMaxHeight(int maxHeight) {
mSnapshotMaxHeight = maxHeight;
}
//endregion
//region Abstract setters and APIs
@ -424,8 +438,8 @@ abstract class CameraController implements
}
@Nullable
/* for tests */ final SizeSelector getPreviewSizeSelector() {
return mPreviewSizeSelector;
/* for tests */ final SizeSelector getPreviewStreamSizeSelector() {
return mPreviewStreamSizeSelector;
}
@NonNull
@ -501,19 +515,71 @@ abstract class CameraController implements
return offset(reference1, reference2) % 180 != 0;
}
@Nullable
final Size getPictureSize(@SuppressWarnings("SameParameterValue") int reference) {
if (mCaptureSize == null || mMode == Mode.VIDEO) return null;
return flip(REF_SENSOR, reference) ? mCaptureSize.flip() : mCaptureSize;
}
@Nullable
final Size getVideoSize(@SuppressWarnings("SameParameterValue") int reference) {
if (mCaptureSize == null || mMode == Mode.PICTURE) return null;
return flip(REF_SENSOR, reference) ? mCaptureSize.flip() : mCaptureSize;
}
final Size getPreviewSize(int reference) {
if (mPreviewSize == null) return null;
return flip(REF_SENSOR, reference) ? mPreviewSize.flip() : mPreviewSize;
@Nullable
final Size getPreviewStreamSize(int reference) {
if (mPreviewStreamSize == null) return null;
return flip(REF_SENSOR, reference) ? mPreviewStreamSize.flip() : mPreviewStreamSize;
}
@SuppressWarnings("SameParameterValue")
@Nullable
final Size getPreviewSurfaceSize(int reference) {
if (mPreview == null) return null;
return flip(REF_VIEW, reference) ? mPreview.getSurfaceSize().flip() : mPreview.getSurfaceSize();
}
/**
* Returns the snapshot size, but not cropped with the view dimensions, which
* is what we will do before creating the snapshot. However, cropping is done at various
* levels so we don't want to perform the op here.
*
* The base snapshot size is based on PreviewStreamSize (later cropped with view ratio). Why?
* One might be tempted to say that it is the SurfaceSize (which already matches the view ratio).
*
* The camera sensor will capture preview frames with PreviewStreamSize and that's it. Then they
* are hardware-scaled by the preview surface, but this does not affect the snapshot, as the
* snapshot recorder simply creates another surface.
*
* We ran tests to ensure that this is true, using:
* 1. a small SurfaceSize and biggest() PreviewStreamSize: output is not low quality
* 2. a big SurfaceSize and smallest() PreviewStreamSize: output is low quality
* In both cases, the result.size here was set to the bigger of the two.
*
* I could not find the same evidence for videos, but I would say that the same things should
* apply, despite the capturing mechanism being different.
*/
@Nullable
final Size getUncroppedSnapshotSize(int reference) {
Size baseSize = getPreviewStreamSize(reference);
if (baseSize == null) return null;
boolean flip = flip(reference, REF_VIEW);
int maxWidth = flip ? mSnapshotMaxHeight : mSnapshotMaxWidth;
int maxHeight = flip ? mSnapshotMaxWidth : mSnapshotMaxHeight;
float baseRatio = AspectRatio.of(baseSize).toFloat();
float maxValuesRatio = AspectRatio.of(maxWidth, maxHeight).toFloat();
if (maxValuesRatio >= baseRatio) {
// Height is the real constraint.
int outHeight = Math.min(baseSize.getHeight(), maxHeight);
int outWidth = (int) Math.floor((float) outHeight * baseRatio);
return new Size(outWidth, outHeight);
} else {
// Width is the real constraint.
int outWidth = Math.min(baseSize.getWidth(), maxWidth);
int outHeight = (int) Math.floor((float) outWidth / baseRatio);
return new Size(outWidth, outHeight);
}
}
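As a worked example of the logic above: with a 1920x1080 base size and both max values at 500, maxValuesRatio (1.0) is smaller than baseRatio (about 1.78), so width is the real constraint: outWidth = min(1920, 500) = 500 and outHeight = floor(500 / 1.78) = 281, giving a 500x281 snapshot.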
@ -531,12 +597,17 @@ abstract class CameraController implements
@NonNull
@SuppressWarnings("WeakerAccess")
protected final Size computeCaptureSize() {
return computeCaptureSize(mMode);
}
@SuppressWarnings("WeakerAccess")
protected final Size computeCaptureSize(Mode mode) {
// We want to pass stuff into the REF_VIEW reference, not the sensor one.
// This is already managed by CameraOptions, so we just flip again at the end.
boolean flip = flip(REF_SENSOR, REF_VIEW);
SizeSelector selector;
Collection<Size> sizes;
if (mMode == Mode.PICTURE) {
if (mode == Mode.PICTURE) {
selector = mPictureSizeSelector;
sizes = mCameraOptions.getSupportedPictureSizes();
} else {
@ -546,14 +617,14 @@ abstract class CameraController implements
selector = SizeSelectors.or(selector, SizeSelectors.biggest());
List<Size> list = new ArrayList<>(sizes);
Size result = selector.select(list).get(0);
LOG.i("computeCaptureSize:", "result:", result, "flip:", flip);
LOG.i("computeCaptureSize:", "result:", result, "flip:", flip, "mode:", mode);
if (flip) result = result.flip(); // Go back to REF_SENSOR
return result;
}
@NonNull
@SuppressWarnings("WeakerAccess")
protected final Size computePreviewSize(@NonNull List<Size> previewSizes) {
protected final Size computePreviewStreamSize(@NonNull List<Size> previewSizes) {
// These sizes come in REF_SENSOR. Since there is an external selector involved,
// we must convert all of them to REF_VIEW, then flip back when returning.
boolean flip = flip(REF_SENSOR, REF_VIEW);
@ -562,12 +633,12 @@ abstract class CameraController implements
sizes.add(flip ? size.flip() : size);
}
// Create our own default selector, which will be used if the external mPreviewSizeSelector
// Create our own default selector, which will be used if the external mPreviewStreamSizeSelector
// is null, or if it fails in finding a size.
Size targetMinSize = mPreview.getOutputSurfaceSize();
Size targetMinSize = getPreviewSurfaceSize(REF_VIEW);
AspectRatio targetRatio = AspectRatio.of(mCaptureSize.getWidth(), mCaptureSize.getHeight());
if (flip) targetRatio = targetRatio.inverse();
LOG.i("size:", "computePreviewSize:", "targetRatio:", targetRatio, "targetMinSize:", targetMinSize);
LOG.i("size:", "computePreviewStreamSize:", "targetRatio:", targetRatio, "targetMinSize:", targetMinSize);
SizeSelector matchRatio = SizeSelectors.and( // Match this aspect ratio and sort by biggest
SizeSelectors.aspectRatio(targetRatio, 0),
SizeSelectors.biggest());
@ -585,14 +656,14 @@ abstract class CameraController implements
// Apply the external selector with this as a fallback,
// and return a size in REF_SENSOR reference.
SizeSelector selector;
if (mPreviewSizeSelector != null) {
selector = SizeSelectors.or(mPreviewSizeSelector, matchAll);
if (mPreviewStreamSizeSelector != null) {
selector = SizeSelectors.or(mPreviewStreamSizeSelector, matchAll);
} else {
selector = matchAll;
}
Size result = selector.select(sizes).get(0);
if (flip) result = result.flip();
LOG.i("computePreviewSize:", "result:", result, "flip:", flip);
LOG.i("computePreviewStreamSize:", "result:", result, "flip:", flip);
return result;
}

@ -88,7 +88,7 @@ public class CameraOptions {
exposureCorrectionSupported = params.getMinExposureCompensation() != 0
|| params.getMaxExposureCompensation() != 0;
// Sizes
// Picture Sizes
List<Camera.Size> sizes = params.getSupportedPictureSizes();
for (Camera.Size size : sizes) {
int width = flipSizes ? size.height : size.width;
@ -96,6 +96,8 @@ public class CameraOptions {
supportedPictureSizes.add(new Size(width, height));
supportedPictureAspectRatio.add(AspectRatio.of(width, height));
}
// Video Sizes
List<Camera.Size> vsizes = params.getSupportedVideoSizes();
if (vsizes != null) {
for (Camera.Size size : vsizes) {
@ -104,6 +106,15 @@ public class CameraOptions {
supportedVideoSizes.add(new Size(width, height));
supportedVideoAspectRatio.add(AspectRatio.of(width, height));
}
} else {
// StackOverflow threads seem to agree that if getSupportedVideoSizes() is null, the preview sizes can be used.
List<Camera.Size> fallback = params.getSupportedPreviewSizes();
for (Camera.Size size : fallback) {
int width = flipSizes ? size.height : size.width;
int height = flipSizes ? size.width : size.height;
supportedVideoSizes.add(new Size(width, height));
supportedVideoAspectRatio.add(AspectRatio.of(width, height));
}
}

@ -378,7 +378,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
*/
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
Size previewSize = mCameraController.getPreviewSize(CameraController.REF_VIEW);
Size previewSize = mCameraController.getPreviewStreamSize(CameraController.REF_VIEW);
if (previewSize == null) {
LOG.w("onMeasure:", "surface is not ready. Calling default behavior.");
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
@ -669,8 +669,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
public void open() {
if (!isEnabled()) return;
if (mCameraPreview != null) mCameraPreview.onResume();
if (checkPermissions(getMode(), getAudio())) {
if (checkPermissions(getAudio())) {
// Update display orientation for current CameraController
mOrientationHelper.enable(getContext());
mCameraController.setDisplayOffset(mOrientationHelper.getDisplayOffset());
@ -680,15 +679,15 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
/**
* Checks that we have appropriate permissions for this session type.
* Throws if session = audio and manifest did not add the microphone permissions.
* @param mode the sessionType to be checked
* Checks that we have appropriate permissions.
* This means checking that we have audio permissions if audio = Audio.ON.
* @param audio the audio setting to be checked
* @return true if we can go on, false otherwise.
*/
@SuppressWarnings("ConstantConditions")
@SuppressLint("NewApi")
protected boolean checkPermissions(@NonNull Mode mode, @NonNull Audio audio) {
checkPermissionsManifestOrThrow(mode, audio);
protected boolean checkPermissions(@NonNull Audio audio) {
checkPermissionsManifestOrThrow(audio);
// Manifest is OK at this point. Let's check runtime permissions.
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) return true;
@ -708,12 +707,11 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
/**
* If mSessionType == SESSION_TYPE_VIDEO we will ask for RECORD_AUDIO permission.
* If audio is on we will ask for RECORD_AUDIO permission.
* If the developer did not add this to its manifest, throw and fire warnings.
* (Hoping this is not caught elsewhere... we should test).
*/
private void checkPermissionsManifestOrThrow(@NonNull Mode mode, @NonNull Audio audio) {
if (mode == Mode.VIDEO && audio == Audio.ON) {
private void checkPermissionsManifestOrThrow(@NonNull Audio audio) {
if (audio == Audio.ON) {
try {
PackageManager manager = getContext().getPackageManager();
PackageInfo info = manager.getPackageInfo(getContext().getPackageName(), PackageManager.GET_PERMISSIONS);
@ -722,7 +720,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
return;
}
}
LOG.e("Permission error:", "When the session type is set to video,",
LOG.e("Permission error:", "When audio is enabled (Audio.ON),",
"the RECORD_AUDIO permission should be added to the app manifest file.");
throw new IllegalStateException(CameraLogger.lastMessage);
} catch (PackageManager.NameNotFoundException e) {
@ -1096,7 +1094,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
// The check already took place, or will happen on start().
mCameraController.setAudio(audio);
} else if (checkPermissions(getMode(), audio)) {
} else if (checkPermissions(audio)) {
// Camera is running. Pass.
mCameraController.setAudio(audio);
@ -1151,15 +1149,15 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
* upscaling. If all you want is set an aspect ratio, use {@link #setPictureSize(SizeSelector)}
* and {@link #setVideoSize(SizeSelector)}.
*
* When size changes, the {@link CameraView} is remeasured so any WRAP_CONTENT dimension
* When stream size changes, the {@link CameraView} is remeasured so any WRAP_CONTENT dimension
* is recomputed accordingly.
*
* See the {@link SizeSelectors} class for handy utilities for creating selectors.
*
* @param selector a size selector
*/
public void setPreviewSize(@NonNull SizeSelector selector) {
mCameraController.setPreviewSizeSelector(selector);
public void setPreviewStreamSize(@NonNull SizeSelector selector) {
mCameraController.setPreviewStreamSizeSelector(selector);
}
@ -1172,22 +1170,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
* @param mode desired session type.
*/
public void setMode(@NonNull Mode mode) {
if (mode == getMode() || isClosed()) {
// The check already took place, or will happen on start().
mCameraController.setMode(mode);
} else if (checkPermissions(mode, getAudio())) {
// Camera is running. CameraImpl setMode will do the trick.
mCameraController.setMode(mode);
} else {
// This means that the audio permission is being asked.
// Stop the camera so it can be restarted by the developer onPermissionResult.
// Developer must also set the session type again...
// Not ideal but good for now.
close();
}
}
@ -1481,6 +1464,27 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
});
}
/**
* Sets the max width for snapshots taken with {@link #takePictureSnapshot()} or
* {@link #takeVideoSnapshot(File)}. If the snapshot width exceeds this value, the snapshot
* will be scaled down to match this constraint.
*
* @param maxWidth max width for snapshots
*/
public void setSnapshotMaxWidth(int maxWidth) {
mCameraController.setSnapshotMaxWidth(maxWidth);
}
/**
* Sets the max height for snapshots taken with {@link #takePictureSnapshot()} or
* {@link #takeVideoSnapshot(File)}. If the snapshot height exceeds this value, the snapshot
* will be scaled down to match this constraint.
*
* @param maxHeight max height for snapshots
*/
public void setSnapshotMaxHeight(int maxHeight) {
mCameraController.setSnapshotMaxHeight(maxHeight);
}
/**
* Returns the size used for snapshots, or null if it hasn't been computed
@ -1495,7 +1499,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
// Get the preview size and crop according to the current view size.
// It's better to do calculations in the REF_VIEW reference, and then flip if needed.
Size preview = mCameraController.getPreviewSize(CameraController.REF_VIEW);
Size preview = mCameraController.getUncroppedSnapshotSize(CameraController.REF_VIEW);
AspectRatio viewRatio = AspectRatio.of(getWidth(), getHeight());
Rect crop = CropHelper.computeCrop(preview, viewRatio);
Size cropSize = new Size(crop.width(), crop.height());
@ -1714,7 +1718,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
interface CameraCallbacks extends OrientationHelper.Callback {
void dispatchOnCameraOpened(CameraOptions options);
void dispatchOnCameraClosed();
void onCameraPreviewSizeChanged();
void onCameraPreviewStreamSizeChanged();
void onShutter(boolean shouldPlaySound);
void dispatchOnVideoTaken(VideoResult result);
void dispatchOnPictureTaken(PictureResult result);
@ -1759,8 +1763,8 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
}
@Override
public void onCameraPreviewSizeChanged() {
mLogger.i("onCameraPreviewSizeChanged");
public void onCameraPreviewStreamSizeChanged() {
mLogger.i("onCameraPreviewStreamSizeChanged");
// Camera preview size has changed.
// Request a layout pass for onMeasure() to do its stuff.
// Potentially this will change CameraView size, which changes Surface size,

@ -21,6 +21,7 @@ public class PictureResult {
Location location;
int rotation;
Size size;
Facing facing;
byte[] data;
int format;
@ -67,6 +68,16 @@ public class PictureResult {
return size;
}
/**
* Returns the facing value with which this picture was taken.
*
* @return the Facing of this picture
*/
@NonNull
public Facing getFacing() {
return facing;
}
/**
* Returns the raw compressed, ready to be saved to file,
* in the given format.

@ -41,8 +41,9 @@ class SnapshotPictureRecorder extends PictureRecorder {
mCamera = camera;
mOutputRatio = outputRatio;
mFormat = mController.mPreviewFormat;
mSensorPreviewSize = mController.mPreviewSize;
mSensorPreviewSize = mController.mPreviewStreamSize;
mWithOverlay = mController.mDisableOverlayFor != DisableOverlayFor.PICTURE;
}
@Override
@ -136,13 +137,24 @@ class SnapshotPictureRecorder extends PictureRecorder {
Matrix.translateM(mTransform, 0, scaleTranslX, scaleTranslY, 0);
Matrix.scaleM(mTransform, 0, realScaleX, realScaleY, 1);
// Apply rotation:
// Fix rotation:
// TODO Not sure why we need the minus here... It makes no sense to me.
LOG.w("Recording frame. Rotation:", mResult.rotation, "Actual:", -mResult.rotation);
int rotation = -mResult.rotation;
mResult.rotation = 0;
// Go back to 0,0 so that rotate and flip work well.
Matrix.translateM(mTransform, 0, 0.5F, 0.5F, 0);
// Apply rotation:
Matrix.rotateM(mTransform, 0, rotation, 0, 0, 1);
// Flip horizontally for front camera:
if (mResult.facing == Facing.FRONT) {
Matrix.scaleM(mTransform, 0, -1, 1, 1);
}
// Go back to old position.
Matrix.translateM(mTransform, 0, -0.5F, -0.5F, 0);
// Future note: passing scale values to the viewport?
@ -204,7 +216,7 @@ class SnapshotPictureRecorder extends PictureRecorder {
// It seems that the buffers are already cleared here, so we need to allocate again.
camera.setPreviewCallbackWithBuffer(null); // Release anything left
camera.setPreviewCallbackWithBuffer(mController); // Add ourselves
mController.mFrameManager.allocate(ImageFormat.getBitsPerPixel(mFormat), mController.mPreviewSize);
mController.mFrameManager.allocate(ImageFormat.getBitsPerPixel(mFormat), mController.mPreviewStreamSize);
}
});
}

@ -65,8 +65,13 @@ class SnapshotVideoRecorder extends VideoRecorder implements GlCameraPreview.Ren
@Override
public void onRendererFrame(@NonNull SurfaceTexture surfaceTexture, SurfaceTexture overlaySurfaceTexture, float scaleX, float scaleY) {
if (mCurrentState == STATE_NOT_RECORDING && mDesiredState == STATE_RECORDING) {
// Set default options
if (mResult.videoBitRate <= 0) mResult.videoBitRate = DEFAULT_VIDEO_BITRATE;
if (mResult.videoFrameRate <= 0) mResult.videoFrameRate = DEFAULT_VIDEO_FRAMERATE;
if (mResult.audioBitRate <= 0) mResult.audioBitRate = DEFAULT_AUDIO_BITRATE;
// Video. Ensure width and height are divisible by 2, since MediaCodec encoders typically require even dimensions.
Size size = mResult.getSize();
// Ensure width and height are divisible by 2, as I have read somewhere.
int width = size.getWidth();
int height = size.getHeight();
width = width % 2 == 0 ? width : width + 1;
@ -77,9 +82,6 @@ class SnapshotVideoRecorder extends VideoRecorder implements GlCameraPreview.Ren
case H_264: type = "video/avc"; break; // MediaFormat.MIMETYPE_VIDEO_AVC:
case DEVICE_DEFAULT: type = "video/avc"; break;
}
if (mResult.videoBitRate <= 0) mResult.videoBitRate = DEFAULT_VIDEO_BITRATE;
if (mResult.audioBitRate <= 0) mResult.audioBitRate = DEFAULT_AUDIO_BITRATE;
if (mResult.videoFrameRate <= 0) mResult.videoFrameRate = DEFAULT_VIDEO_FRAMERATE;
LOG.w("Creating frame encoder. Rotation:", mResult.rotation);
TextureMediaEncoder.Config config = new TextureMediaEncoder.Config(width, height,
mResult.videoBitRate,
@ -92,10 +94,14 @@ class SnapshotVideoRecorder extends VideoRecorder implements GlCameraPreview.Ren
EGL14.eglGetCurrentContext()
);
TextureMediaEncoder videoEncoder = new TextureMediaEncoder(config);
// Audio
AudioMediaEncoder audioEncoder = null;
if (mResult.audio == Audio.ON) {
audioEncoder = new AudioMediaEncoder(new AudioMediaEncoder.Config(mResult.audioBitRate));
}
// Engine
mEncoderEngine = new MediaEncoderEngine(mResult.file, videoEncoder, audioEncoder,
mResult.maxDuration, mResult.maxSize, SnapshotVideoRecorder.this);
mEncoderEngine.start();
@ -104,15 +110,14 @@ class SnapshotVideoRecorder extends VideoRecorder implements GlCameraPreview.Ren
}
if (mCurrentState == STATE_RECORDING) {
TextureMediaEncoder.Frame frame = new TextureMediaEncoder.Frame();
frame.timestamp = surfaceTexture.getTimestamp();
frame.transform = new float[16]; // TODO would be cool to avoid this at every frame. But it's not easy.
frame.overlayTransform = new float[16];
surfaceTexture.getTransformMatrix(frame.transform);
TextureMediaEncoder textureEncoder = (TextureMediaEncoder) mEncoderEngine.getVideoEncoder();
TextureMediaEncoder.TextureFrame textureFrame = textureEncoder.acquireFrame();
textureFrame.timestamp = surfaceTexture.getTimestamp();
surfaceTexture.getTransformMatrix(textureFrame.transform);
if (mWithOverlay && overlaySurfaceTexture != null) {
overlaySurfaceTexture.getTransformMatrix(frame.overlayTransform);
overlaySurfaceTexture.getTransformMatrix(textureFrame.overlayTransform);
}
mEncoderEngine.notify(TextureMediaEncoder.FRAME_EVENT, frame);
mEncoderEngine.notify(TextureMediaEncoder.FRAME_EVENT, textureFrame);
}
if (mCurrentState == STATE_RECORDING && mDesiredState == STATE_NOT_RECORDING) {
@ -125,7 +130,6 @@ class SnapshotVideoRecorder extends VideoRecorder implements GlCameraPreview.Ren
}
@EncoderThread
@Override
public void onEncoderStop(int stopReason, @Nullable Exception e) {
// If something failed, undo the result, since this is the mechanism

@ -22,6 +22,7 @@ abstract class VideoRecorder {
abstract void stop();
@SuppressWarnings("WeakerAccess")
protected void dispatchResult() {
if (mListener != null) {
mListener.onVideoResult(mResult);

@ -25,6 +25,7 @@ public class VideoResult {
int rotation;
Size size;
File file;
Facing facing;
VideoCodec codec;
Audio audio;
long maxSize;
@ -87,6 +88,16 @@ public class VideoResult {
return file;
}
/**
* Returns the facing value with which this video was recorded.
*
* @return the Facing of this video
*/
@NonNull
public Facing getFacing() {
return facing;
}
/**
* Returns the codec that was used to encode the video frames.
*

@ -25,6 +25,9 @@
<attr name="cameraVideoBitRate" format="integer|reference" />
<attr name="cameraAudioBitRate" format="integer|reference" />
<attr name="cameraSnapshotMaxWidth" format="integer|reference" />
<attr name="cameraSnapshotMaxHeight" format="integer|reference" />
<attr name="cameraGestureTap" format="enum">
<enum name="none" value="0" />
<enum name="focus" value="1" />

@ -2,6 +2,8 @@ package com.otaliastudios.cameraview;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;
import androidx.annotation.NonNull;
import java.lang.ref.WeakReference;
@ -63,16 +65,22 @@ class WorkerHandler {
}
@NonNull
public Thread getThread() {
public HandlerThread getThread() {
return mThread;
}
@NonNull
public Looper getLooper() {
return mThread.getLooper();
}
static void destroy() {
for (String key : sCache.keySet()) {
WeakReference<WorkerHandler> ref = sCache.get(key);
WorkerHandler handler = ref.get();
if (handler != null && handler.getThread().isAlive()) {
handler.getThread().interrupt();
// handler.getThread().quit();
}
ref.clear();
}

@ -6,6 +6,13 @@ import androidx.annotation.Nullable;
import android.view.View;
import android.view.ViewGroup;
/**
* A CameraPreview receives the input stream from the {@link CameraController}, and streams it
* into an output surface that belongs to the view hierarchy.
*
* @param <T> the type of view which hosts the content surface
* @param <Output> the type of output, either {@link android.view.SurfaceHolder} or {@link android.graphics.SurfaceTexture}
*/
abstract class CameraPreview<T extends View, Output> {
protected final static CameraLogger LOG = CameraLogger.create(CameraPreview.class.getSimpleName());
@ -67,8 +74,8 @@ abstract class CameraPreview<T extends View, Output> {
// As far as I can see, these are the actual preview dimensions, as set in CameraParameters.
// This is called by the CameraImpl.
// These must be already rotated, if needed, to be consistent with surface/view sizes.
void setInputStreamSize(int width, int height, boolean wasFlipped) {
LOG.i("setInputStreamSize:", "desiredW=", width, "desiredH=", height);
void setStreamSize(int width, int height, boolean wasFlipped) {
LOG.i("setStreamSize:", "desiredW=", width, "desiredH=", height);
mInputStreamWidth = width;
mInputStreamHeight = height;
mInputFlipped = wasFlipped;
@ -78,12 +85,12 @@ abstract class CameraPreview<T extends View, Output> {
}
@NonNull
final Size getInputStreamSize() {
final Size getStreamSize() {
return new Size(mInputStreamWidth, mInputStreamHeight);
}
@NonNull
final Size getOutputSurfaceSize() {
final Size getSurfaceSize() {
return new Size(mOutputSurfaceWidth, mOutputSurfaceHeight);
}
@ -97,8 +104,8 @@ abstract class CameraPreview<T extends View, Output> {
@SuppressWarnings("WeakerAccess")
protected final void dispatchOnOutputSurfaceAvailable(int width, int height) {
LOG.i("dispatchOnOutputSurfaceAvailable:", "w=", width, "h=", height);
protected final void dispatchOnSurfaceAvailable(int width, int height) {
LOG.i("dispatchOnSurfaceAvailable:", "w=", width, "h=", height);
mOutputSurfaceWidth = width;
mOutputSurfaceHeight = height;
if (mOutputSurfaceWidth > 0 && mOutputSurfaceHeight > 0) {
@ -111,8 +118,8 @@ abstract class CameraPreview<T extends View, Output> {
// As far as I can see, these are the view/surface dimensions.
// This is called by subclasses.
@SuppressWarnings("WeakerAccess")
protected final void dispatchOnOutputSurfaceSizeChanged(int width, int height) {
LOG.i("dispatchOnOutputSurfaceSizeChanged:", "w=", width, "h=", height);
protected final void dispatchOnSurfaceSizeChanged(int width, int height) {
LOG.i("dispatchOnSurfaceSizeChanged:", "w=", width, "h=", height);
if (width != mOutputSurfaceWidth || height != mOutputSurfaceHeight) {
mOutputSurfaceWidth = width;
mOutputSurfaceHeight = height;
@ -124,7 +131,7 @@ abstract class CameraPreview<T extends View, Output> {
}
@SuppressWarnings("WeakerAccess")
protected final void dispatchOnOutputSurfaceDestroyed() {
protected final void dispatchOnSurfaceDestroyed() {
mOutputSurfaceWidth = 0;
mOutputSurfaceHeight = 0;
mSurfaceCallback.onSurfaceDestroyed();

@ -93,7 +93,7 @@ class GlCameraPreview extends CameraPreview<GLSurfaceView, SurfaceTexture> imple
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
dispatchOnOutputSurfaceDestroyed();
dispatchOnSurfaceDestroyed();
mDispatched = false;
}
});
@ -193,7 +193,7 @@ class GlCameraPreview extends CameraPreview<GLSurfaceView, SurfaceTexture> imple
@Override
public void onSurfaceChanged(GL10 gl, final int width, final int height) {
if (!mDispatched) {
dispatchOnOutputSurfaceAvailable(width, height);
dispatchOnSurfaceAvailable(width, height);
mDispatched = true;
} else if (mOutputSurfaceWidth == width && mOutputSurfaceHeight == height) {
// I was experimenting and this was happening.
@ -202,13 +202,13 @@ class GlCameraPreview extends CameraPreview<GLSurfaceView, SurfaceTexture> imple
// With other CameraPreview implementation we could just dispatch the 'size changed' event
// to the controller and everything would go straight. In case of GL, apparently we have to
// force recreate the EGLContext by calling onPause and onResume in the UI thread.
dispatchOnOutputSurfaceDestroyed();
dispatchOnSurfaceDestroyed();
getView().post(new Runnable() {
@Override
public void run() {
getView().onPause();
getView().onResume();
dispatchOnOutputSurfaceAvailable(width, height);
dispatchOnSurfaceAvailable(width, height);
}
});
}

@ -44,17 +44,17 @@ class SurfaceCameraPreview extends CameraPreview<SurfaceView, SurfaceHolder> {
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
LOG.i("callback:", "surfaceChanged", "w:", width, "h:", height, "dispatched:", mDispatched);
if (!mDispatched) {
dispatchOnOutputSurfaceAvailable(width, height);
dispatchOnSurfaceAvailable(width, height);
mDispatched = true;
} else {
dispatchOnOutputSurfaceSizeChanged(width, height);
dispatchOnSurfaceSizeChanged(width, height);
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
LOG.i("callback:", "surfaceDestroyed");
dispatchOnOutputSurfaceDestroyed();
dispatchOnSurfaceDestroyed();
mDispatched = false;
}
});

@ -28,17 +28,17 @@ class TextureCameraPreview extends CameraPreview<TextureView, SurfaceTexture> {
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
dispatchOnOutputSurfaceAvailable(width, height);
dispatchOnSurfaceAvailable(width, height);
}
@Override
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
dispatchOnOutputSurfaceSizeChanged(width, height);
dispatchOnSurfaceSizeChanged(width, height);
}
@Override
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
dispatchOnOutputSurfaceDestroyed();
dispatchOnSurfaceDestroyed();
return true;
}
@ -70,8 +70,8 @@ class TextureCameraPreview extends CameraPreview<TextureView, SurfaceTexture> {
@TargetApi(15)
@Override
void setInputStreamSize(int width, int height, boolean wasFlipped) {
super.setInputStreamSize(width, height, wasFlipped);
void setStreamSize(int width, int height, boolean wasFlipped) {
super.setStreamSize(width, height, wasFlipped);
if (getView().getSurfaceTexture() != null) {
getView().getSurfaceTexture().setDefaultBufferSize(width, height);
}

@ -1,12 +1,12 @@
coverage:
precision: 1
round: down
range: "40...100"
range: "30...70"
status:
project:
default:
target: 45%
target: 40%
patch:
default:
target: 70%

@ -23,6 +23,6 @@ android {
dependencies {
implementation project(':cameraview')
implementation 'androidx.appcompat:appcompat:1.1.0-alpha01'
implementation 'com.google.android.material:material:1.1.0-alpha02'
implementation 'androidx.appcompat:appcompat:1.1.0-alpha02'
implementation 'com.google.android.material:material:1.1.0-alpha03'
}

@ -15,9 +15,9 @@
<activity
android:name=".CameraActivity"
android:theme="@style/Theme.MainActivity"
android:hardwareAccelerated="true"
android:configChanges="orientation|screenLayout|keyboardHidden"
android:screenOrientation="portrait">
android:screenOrientation="portrait"
android:hardwareAccelerated="true">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />

@ -2,6 +2,7 @@ package com.otaliastudios.cameraview.demo;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.net.Uri;
import android.os.Bundle;
import androidx.annotation.NonNull;
import com.google.android.material.bottomsheet.BottomSheetBehavior;

@ -2,8 +2,11 @@ package com.otaliastudios.cameraview.demo;
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import androidx.annotation.Nullable;
import android.util.Log;
import android.widget.ImageView;
import com.otaliastudios.cameraview.AspectRatio;
@ -45,6 +48,18 @@ public class PicturePreviewActivity extends Activity {
imageView.setImageBitmap(bitmap);
}
});
if (result.isSnapshot()) {
// Log the real size for debugging reasons.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeByteArray(result.getData(), 0, result.getData().length, options);
if (result.getRotation() % 180 != 0) {
Log.e("PicturePreview", "The picture full size is " + result.getSize().getHeight() + "x" + result.getSize().getWidth());
} else {
Log.e("PicturePreview", "The picture full size is " + result.getSize().getWidth() + "x" + result.getSize().getHeight());
}
}
}
@Override

@ -5,6 +5,8 @@ import android.media.MediaPlayer;
import android.net.Uri;
import android.os.Bundle;
import androidx.annotation.Nullable;
import android.util.Log;
import android.view.View;
import android.view.ViewGroup;
import android.widget.MediaController;
@ -73,6 +75,11 @@ public class VideoPreviewActivity extends Activity {
lp.height = (int) (viewWidth * (videoHeight / videoWidth));
videoView.setLayoutParams(lp);
playVideo();
if (result.isSnapshot()) {
// Log the real size for debugging reasons.
Log.e("VideoPreview", "The video full size is " + videoWidth + "x" + videoHeight);
}
}
});
}

@ -9,9 +9,11 @@ date: 2018-12-20 22:07:22
disqus: 1
---
If you are planning to use the snapshot APIs, the size of the media output is that of the preview,
accounting for any cropping made when [measuring the view](preview-size.html).
If you are planning to use the standard APIs for capturing, then what follows applies.
If you are planning to use the snapshot APIs, the size of the media output is that of the preview stream,
accounting for any cropping made when [measuring the view](preview-size.html) and other constraints.
Please read the [Snapshot Size](snapshot-size.html) document.
If you are planning to use the standard APIs, then what follows applies.
### Controlling Size

@ -46,8 +46,8 @@ resulting snapshots are square as well, no matter what the sensor available size
|------|-----|-------|--------------------------|------------------------|---------|-----------|
|`takePicture()`|Pictures|Standard|`yes`|`no`|`no`|That of `setPictureSize`|
|`takeVideo(File)`|Videos|Standard|`no`|`yes`|`no`|That of `setVideoSize`|
|`takePictureSnapshot()`|Pictures|Snapshot|`yes`|`yes`|`yes`|That of the view|
|`takeVideoSnapshot(File)`|Videos|Snapshot|`yes`|`yes`|`yes`|That of the view|
|`takePictureSnapshot()`|Pictures|Snapshot|`yes`|`yes`|`yes`|That of the preview stream, [or less](snapshot-size.html)|
|`takeVideoSnapshot(File)`|Videos|Snapshot|`yes`|`yes`|`yes`|That of the preview stream, [or less](snapshot-size.html)|
Please note that the video snapshot feature requires:

@ -8,6 +8,22 @@ order: 3
New versions are released through GitHub, so the reference page is the [GitHub Releases](https://github.com/natario1/CameraView/releases) page.
### v2.0.0-beta03
- Fixed a few bugs ([#392][392])
- Important fixes to video snapshot recording ([#374][374])
### v2.0.0-beta02
- Fixed important bugs ([#356][356])
- Picture snapshots are now flipped when front camera is used ([#360][360])
- Added `PictureResult.getFacing()` and `VideoResult.getFacing()` ([#360][360])
### v2.0.0-beta01
This is the first beta release. For changes with respect to v1, please take a look at the [migration guide](../extra/v1-migration-guide.html).
[356]: https://github.com/natario1/CameraView/pull/356
[360]: https://github.com/natario1/CameraView/pull/360
[374]: https://github.com/natario1/CameraView/pull/374
[392]: https://github.com/natario1/CameraView/pull/392

@ -2,7 +2,7 @@
layout: page
title: "Debugging"
category: docs
order: 10
order: 12
date: 2018-12-20 20:02:38
disqus: 1
---

@ -2,7 +2,7 @@
layout: page
title: "Error Handling"
category: docs
order: 9
order: 11
date: 2018-12-20 20:02:31
disqus: 1
---

@ -24,7 +24,7 @@ allprojects {
Then simply download the latest version:
```groovy
api 'com.otaliastudios:cameraview:2.0.0-beta01'
api 'com.otaliastudios:cameraview:2.0.0-beta03'
```
No other configuration steps are needed.

@ -4,7 +4,7 @@ title: "More features"
subtitle: "Undocumented features & more"
description: "Undocumented features & more"
category: docs
order: 11
order: 13
date: 2018-12-20 20:41:20
disqus: 1
---

This means that part of the preview might be hidden, and the output might contain scenes
that were not visible during the capture, **unless it is taken as a snapshot, since snapshots account for cropping**.
## Advanced feature: Preview Size Selection
## Advanced feature: Preview Stream Size Selection
**Only do this if you know what you are doing. This is typically not needed - prefer picture/video size selectors,
as they will drive the preview size selection and, eventually, the view size. If what you want is just
as they will drive the preview stream size selection and, eventually, the view size. If what you want is just
to choose an aspect ratio, do so with [Capture Size](capture-size.html) selection.**
As said, `WRAP_CONTENT` adapts the view boundaries to the preview size. The preview size must be determined
As said, `WRAP_CONTENT` adapts the view boundaries to the preview stream size. The preview stream size must be determined
based on the sizes that the device sensor & hardware actually support. This operation is done automatically
by the engine. The default selector will do the following:
@ -70,10 +70,10 @@ by the engine. The default selector will do the following:
- Try to match both, or just one, or fall back to the biggest available size
There are not many reasons why you would replace this, other than to control the frame processor size
or, indirectly, the snapshot size. You can, however, hook into the process using `setPreviewSize(SizeSelector)`:
or, indirectly, the snapshot size. You can, however, hook into the process using `setPreviewStreamSize(SizeSelector)`:
```java
cameraView.setPreviewSize(new SizeSelector() {
cameraView.setPreviewStreamSize(new SizeSelector() {
@Override
public List<Size> select(List<Size> source) {
// Receives a list of available sizes.
@ -82,7 +82,7 @@ cameraView.setPreviewSize(new SizeSelector() {
});
```
After the preview size is determined, if it has changed since last time, the `CameraView` will receive
After the preview stream size is determined, if it has changed since last time, the `CameraView` will receive
another call to `onMeasure` so the `WRAP_CONTENT` magic can take place.
To understand how SizeSelectors work and the available utilities, please read the [Capture Size](capture-size.html) document.
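As a quick sketch of what a custom selector can look like, the same utilities used internally can be combined; the 16:9 preference here is an illustrative assumption, not a recommendation:

```java
// Prefer the biggest 16:9 size; fall back to the biggest available size.
SizeSelector preferred = SizeSelectors.and(
        SizeSelectors.aspectRatio(AspectRatio.of(16, 9), 0),
        SizeSelectors.biggest());
cameraView.setPreviewStreamSize(SizeSelectors.or(preferred, SizeSelectors.biggest()));
```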

@ -4,7 +4,7 @@ title: "Runtime Permissions"
subtitle: "Permissions and Manifest setup"
description: "Permissions and Manifest setup"
category: docs
order: 8
order: 10
date: 2018-12-20 20:03:03
disqus: 1
---
@ -41,8 +41,9 @@ device has cameras, and then start the camera view.
On Marshmallow+, the user must explicitly approve our permissions. You can
- handle permissions yourself and then call `cameraView.start()` once they are acquired
- or call `cameraView.start()` anyway: `CameraView` will present a permission request to the user based on
- handle permissions yourself and then call `open()` or `setLifecycleOwner()` once they are acquired
- ignore this: `CameraView` will present a permission request to the user based on
whether they are needed or not with the current configuration.
In the second case, you should restart the camera if you have a successful response from `onRequestPermissionsResult()`.
Note, however, that this is done at the activity level, so the permission request callback
`onRequestPermissionsResult()` will be invoked on the parent activity, not the fragment.
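A minimal sketch of that restart, assuming the activity holds a `cameraView` reference:

```java
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    boolean allGranted = true;
    for (int grantResult : grantResults) {
        allGranted = allGranted && grantResult == PackageManager.PERMISSION_GRANTED;
    }
    if (allGranted) {
        cameraView.open(); // Restart the camera now that permissions are in place.
    }
}
```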

@ -0,0 +1,58 @@
---
layout: page
title: "Snapshot Size"
subtitle: "Sizing the snapshots output"
description: "Sizing the snapshots output"
category: docs
order: 9
date: 2019-02-24 17:36:39
disqus: 1
---
Snapshots are captured from the preview stream instead of using a separate capture channel.
They are extremely fast, small in size, and give you a low-quality output that can be easily
uploaded or processed.
The snapshot size is based on the size of the preview stream, which is described in the [Preview Size](preview-size.html) document.
Although the preview stream size is customizable, note that this is considered an advanced feature,
as the default preview stream size selector already does a good job for the vast majority of use cases.
When taking snapshots, the preview stream size is then changed to match some constraints.
### Matching the preview ratio
Snapshots will automatically be cropped to match the preview aspect ratio. This means that if your
preview is square, you can finally take a square picture or video, regardless of the available sensor sizes.
Take a look at the [Preview Size](preview-size.html) document to learn about preview sizing.
### Other constraints
You can refine the size further by applying `maxWidth` and `maxHeight` constraints:
```java
cameraView.setSnapshotMaxWidth(500);
cameraView.setSnapshotMaxHeight(500);
```
These values apply to both picture and video snapshots. If the snapshot dimensions exceed these values
(both default to `Integer.MAX_VALUE`), the snapshot will be scaled down to match the constraints.
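As a worked example: a 1080x1920 portrait snapshot with both constraints set to 500 has a base ratio of 0.5625, which is smaller than the 1.0 ratio of the max values, so height is the binding constraint and the output becomes 281x500, preserving the aspect ratio.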
This is very useful as it decouples the snapshot size logic from the preview. By using small constraints,
you can have a pleasant, good looking preview stream, while still capturing fast, low-res snapshots
with no issues.
### XML Attributes
```xml
<com.otaliastudios.cameraview.CameraView
app:cameraSnapshotMaxWidth="500"
app:cameraSnapshotMaxHeight="500"/>
```
### Related APIs
|Method|Description|
|------|-----------|
|`setSnapshotMaxWidth(int)`|Sets the max width for snapshots. If out of bounds, the output will be scaled down.|
|`setSnapshotMaxHeight(int)`|Sets the max height for snapshots. If out of bounds, the output will be scaled down.|