Improve video encoding (#506)

* Reorder code and add long comments

* Simplify encoders Config

* Fix Audio recording bugs

* Anticipate max length detection

* Anticipate even more

* Estimate video bit rate instead of ugly default (see the sketch after this list)

* Fix bugs, better logs and comments

* Fix long-standing sync bug

* Make inner classes public

* Remove performance logging code

* Add Audio.MONO and Audio.STEREO

* Add mono and stereo in attrs

* Write zeros when we have gaps

* Improve comments

* Add performance flags

* Move configs to separate classes

* Fix stereo bug

* Add onVideoRecordingEnd

* Add changelog notes

* Address some TODOs

* Refactor tests, add PoolTest
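For the "estimate video bit rate" item above, the usual approach is to derive the bit rate from resolution and frame rate instead of hard-coding a default. The sketch below is illustrative only: the 0.07 "bits per pixel per frame" factor and the motion factor of 2 are common H.264 baselines, not necessarily the exact constants this commit ships.

// Illustrative sketch of a resolution/frame-rate based bit rate estimate.
final class BitRateEstimator {
    static int estimateVideoBitRate(int width, int height, int frameRate) {
        // bitsPerPixelPerFrame * motionFactor * pixels * fps
        // For 1920x1080 at 30fps this gives roughly 8.7 Mbps.
        return (int) (0.07F * 2F * width * height * frameRate);
    }
}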
Mattia Iavarone 5 years ago committed by GitHub
parent ea952d1497
commit 6962744d4f
54 changed files (number of changed lines in parentheses):

1. README.md (2)
2. cameraview/src/androidTest/java/com/otaliastudios/cameraview/BaseTest.java (79)
3. cameraview/src/androidTest/java/com/otaliastudios/cameraview/CameraLoggerTest.java (16)
4. cameraview/src/androidTest/java/com/otaliastudios/cameraview/CameraUtilsTest.java (4)
5. cameraview/src/androidTest/java/com/otaliastudios/cameraview/CameraViewCallbacksTest.java (107)
6. cameraview/src/androidTest/java/com/otaliastudios/cameraview/CameraViewTest.java (42)
7. cameraview/src/androidTest/java/com/otaliastudios/cameraview/engine/CameraIntegrationTest.java (65)
8. cameraview/src/androidTest/java/com/otaliastudios/cameraview/engine/MockCameraEngine.java (9)
9. cameraview/src/androidTest/java/com/otaliastudios/cameraview/gesture/GestureFinderTest.java (2)
10. cameraview/src/androidTest/java/com/otaliastudios/cameraview/gesture/PinchGestureFinderTest.java (4)
11. cameraview/src/androidTest/java/com/otaliastudios/cameraview/gesture/ScrollGestureFinderTest.java (7)
12. cameraview/src/androidTest/java/com/otaliastudios/cameraview/gesture/TapGestureFinderTest.java (4)
13. cameraview/src/androidTest/java/com/otaliastudios/cameraview/internal/GridLinesLayoutTest.java (2)
14. cameraview/src/androidTest/java/com/otaliastudios/cameraview/internal/utils/CropHelperTest.java (4)
15. cameraview/src/androidTest/java/com/otaliastudios/cameraview/internal/utils/OrientationHelperTest.java (11)
16. cameraview/src/androidTest/java/com/otaliastudios/cameraview/markers/MarkerLayoutTest.java (3)
17. cameraview/src/androidTest/java/com/otaliastudios/cameraview/overlay/OverlayLayoutTest.java (31)
18. cameraview/src/androidTest/java/com/otaliastudios/cameraview/picture/PictureRecorderTest.java (1)
19. cameraview/src/androidTest/java/com/otaliastudios/cameraview/preview/CameraPreviewTest.java (5)
20. cameraview/src/androidTest/java/com/otaliastudios/cameraview/preview/SurfaceCameraPreviewTest.java (4)
21. cameraview/src/androidTest/java/com/otaliastudios/cameraview/preview/TextureCameraPreviewTest.java (4)
22. cameraview/src/androidTest/java/com/otaliastudios/cameraview/video/VideoRecorderTest.java (10)
23. cameraview/src/main/java/com/otaliastudios/cameraview/CameraListener.java (21)
24. cameraview/src/main/java/com/otaliastudios/cameraview/CameraLogger.java (27)
25. cameraview/src/main/java/com/otaliastudios/cameraview/CameraOptions.java (15)
26. cameraview/src/main/java/com/otaliastudios/cameraview/CameraView.java (21)
27. cameraview/src/main/java/com/otaliastudios/cameraview/controls/Audio.java (18)
28. cameraview/src/main/java/com/otaliastudios/cameraview/engine/CameraEngine.java (5)
29. cameraview/src/main/java/com/otaliastudios/cameraview/gesture/GestureFinder.java (4)
30. cameraview/src/main/java/com/otaliastudios/cameraview/internal/utils/Pool.java (79)
31. cameraview/src/main/java/com/otaliastudios/cameraview/preview/GlCameraPreview.java (19)
32. cameraview/src/main/java/com/otaliastudios/cameraview/video/FullVideoRecorder.java (17)
33. cameraview/src/main/java/com/otaliastudios/cameraview/video/SnapshotVideoRecorder.java (66)
34. cameraview/src/main/java/com/otaliastudios/cameraview/video/VideoRecorder.java (23)
35. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/AudioConfig.java (96)
36. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/AudioMediaEncoder.java (448)
37. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/AudioTimestamp.java (105)
38. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/InputBuffer.java (15)
39. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/MediaEncoder.java (305)
40. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/MediaEncoderEngine.java (109)
41. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/OutputBuffer.java (9)
42. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/TextureConfig.java (38)
43. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/TextureMediaEncoder.java (146)
44. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/VideoConfig.java (25)
45. cameraview/src/main/java/com/otaliastudios/cameraview/video/encoding/VideoMediaEncoder.java (61)
46. cameraview/src/main/res/values/attrs.xml (2)
47. cameraview/src/test/java/com/otaliastudios/cameraview/internal/utils/PoolTest.java (168)
48. demo/src/main/java/com/otaliastudios/cameraview/demo/CameraActivity.java (21)
49. demo/src/main/res/layout/activity_camera.xml (2)
50. docs/_posts/2018-12-20-camera-events.md (2)
51. docs/_posts/2018-12-20-capturing-media.md (31)
52. docs/_posts/2018-12-20-changelog.md (6)
53. docs/_posts/2018-12-20-controls.md (46)
54. docs/_posts/2018-12-20-more-features.md (32)

@ -95,7 +95,7 @@ motivation boost to push the library forward.
app:cameraFlash="on|auto|torch|off"
app:cameraWhiteBalance="auto|cloudy|daylight|fluorescent|incandescent"
app:cameraMode="picture|video"
app:cameraAudio="on|off"
app:cameraAudio="on|off|mono|stereo"
app:cameraGrid="draw3x3|draw4x4|drawPhi|off"
app:cameraGridColor="@color/grid_color"
app:cameraPlaySounds="true|false"

@ -8,11 +8,9 @@ import android.os.Handler;
import android.os.Looper;
import android.os.PowerManager;
import androidx.annotation.NonNull;
import androidx.test.platform.app.InstrumentationRegistry;
import android.util.Log;
import android.view.View;
import com.otaliastudios.cameraview.internal.utils.Op;
import org.junit.After;
@ -27,9 +25,7 @@ import java.util.concurrent.CountDownLatch;
import static android.content.Context.KEYGUARD_SERVICE;
import static android.content.Context.POWER_SERVICE;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
public class BaseTest {
@ -38,17 +34,16 @@ public class BaseTest {
// https://github.com/linkedin/test-butler/blob/bc2bb4df13d0a554d2e2b0ea710795017717e710/test-butler-app/src/main/java/com/linkedin/android/testbutler/ButlerService.java#L121
@BeforeClass
@SuppressWarnings("MissingPermission")
public static void wakeUp() {
public static void beforeClass_wakeUp() {
CameraLogger.setLogLevel(CameraLogger.LEVEL_VERBOSE);
// Acquire a keyguard lock to prevent the lock screen from randomly appearing and breaking tests
KeyguardManager keyguardManager = (KeyguardManager) context().getSystemService(KEYGUARD_SERVICE);
KeyguardManager keyguardManager = (KeyguardManager) getContext().getSystemService(KEYGUARD_SERVICE);
keyguardLock = keyguardManager.newKeyguardLock("CameraViewLock");
keyguardLock.disableKeyguard();
// Acquire a wake lock to prevent the cpu from going to sleep and breaking tests
PowerManager powerManager = (PowerManager) context().getSystemService(POWER_SERVICE);
PowerManager powerManager = (PowerManager) getContext().getSystemService(POWER_SERVICE);
wakeLock = powerManager.newWakeLock(PowerManager.FULL_WAKE_LOCK
| PowerManager.ACQUIRE_CAUSES_WAKEUP
| PowerManager.ON_AFTER_RELEASE, "CameraViewLock");
@ -56,8 +51,9 @@ public class BaseTest {
}
@AfterClass
@SuppressWarnings("MissingPermission")
public static void releaseWakeUp() {
public static void afterClass_releaseWakeUp() {
CameraLogger.setLogLevel(CameraLogger.LEVEL_ERROR);
wakeLock.release();
keyguardLock.reenableKeyguard();
}
@ -66,61 +62,48 @@ public class BaseTest {
* This will make mockito report the error when it should.
* Mockito reports failure on the next mockito invocation, which is terrible
* since it might be on the next test or even never happen.
*
* Calling this
*/
@After
public void syncMockito() {
public void after_checkMockito() {
Object object = Mockito.mock(Object.class);
//noinspection ResultOfMethodCallIgnored
object.toString();
}
public static void ui(Runnable runnable) {
InstrumentationRegistry.getInstrumentation().runOnMainSync(runnable);
}
public static void uiAsync(Runnable runnable) {
new Handler(Looper.getMainLooper()).post(runnable);
}
public static Context context() {
@NonNull
protected static Context getContext() {
return InstrumentationRegistry.getInstrumentation().getContext();
}
public static void uiRequestLayout(final View view) {
ui(new Runnable() {
@Override
public void run() {
view.requestLayout();
}
});
protected static void uiSync(Runnable runnable) {
InstrumentationRegistry.getInstrumentation().runOnMainSync(runnable);
}
public static void idle() {
InstrumentationRegistry.getInstrumentation().waitForIdleSync();
@SuppressWarnings("unused")
protected static void uiAsync(Runnable runnable) {
new Handler(Looper.getMainLooper()).post(runnable);
}
public static void sleep(long time) {
try {
Thread.sleep(time);
} catch (InterruptedException e) {
e.printStackTrace();
}
@SuppressWarnings("unused")
protected static void waitUiIdle() {
InstrumentationRegistry.getInstrumentation().waitForIdleSync();
}
public static void grantPermissions() {
protected static void grantAllPermissions() {
grantPermission("android.permission.CAMERA");
grantPermission("android.permission.RECORD_AUDIO");
grantPermission("android.permission.WRITE_EXTERNAL_STORAGE");
}
public static void grantPermission(String permission) {
@SuppressWarnings("WeakerAccess")
protected static void grantPermission(@NonNull String permission) {
if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) return;
String command = "pm grant " + context().getPackageName() + " " + permission;
String command = "pm grant " + getContext().getPackageName() + " " + permission;
InstrumentationRegistry.getInstrumentation().getUiAutomation().executeShellCommand(command);
}
public static Stubber doCountDown(final CountDownLatch latch) {
@NonNull
protected static Stubber doCountDown(final CountDownLatch latch) {
return doAnswer(new Answer<Object>() {
@Override
public Object answer(InvocationOnMock invocation) {
@ -130,22 +113,24 @@ public class BaseTest {
});
}
public static <T> Stubber doEndTask(final Op<T> op, final T response) {
@NonNull
protected static <T> Stubber doEndOp(final Op<T> op, final T response) {
return doAnswer(new Answer<Object>() {
@Override
public Object answer(InvocationOnMock invocation) throws Throwable {
public Object answer(InvocationOnMock invocation) {
op.end(response);
return null;
}
});
}
public static Stubber doEndTask(final Op op, final int withReturnArgument) {
@NonNull
protected static <T> Stubber doEndOp(final Op<T> op, final int withReturnArgument) {
return doAnswer(new Answer<Object>() {
@Override
public Object answer(InvocationOnMock invocation) throws Throwable {
Object o = invocation.getArguments()[withReturnArgument];
public Object answer(InvocationOnMock invocation) {
//noinspection unchecked
T o = (T) invocation.getArguments()[withReturnArgument];
op.end(o);
return null;
}

@ -10,8 +10,6 @@ import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import static org.junit.Assert.*;
import static org.mockito.Mockito.*;
@ -26,11 +24,13 @@ public class CameraLoggerTest extends BaseTest {
@Before
public void setUp() {
CameraLogger.setLogLevel(CameraLogger.LEVEL_VERBOSE);
CameraLogger.unregisterLogger(CameraLogger.sAndroidLogger); // Avoid writing into Logs during these tests
logger = CameraLogger.create(loggerTag);
}
@After
public void tearDown() {
CameraLogger.registerLogger(CameraLogger.sAndroidLogger);
logger = null;
}
@ -110,15 +110,9 @@ public class CameraLoggerTest extends BaseTest {
CameraLogger.registerLogger(mock);
final Op<Throwable> op = new Op<>();
doAnswer(new Answer() {
@Override
public Object answer(InvocationOnMock invocation) throws Throwable {
Object[] args = invocation.getArguments();
Throwable throwable = (Throwable) args[3];
op.end(throwable);
return null;
}
}).when(mock).log(anyInt(), anyString(), anyString(), any(Throwable.class));
doEndOp(op, 3)
.when(mock)
.log(anyInt(), anyString(), anyString(), any(Throwable.class));
op.listen();
logger.e("Got no error.");

@ -56,7 +56,7 @@ public class CameraUtilsTest extends BaseTest {
};
// Run on ui because it involves handlers.
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
if (maxWidth > 0 && maxHeight > 0) {
@ -84,8 +84,6 @@ public class CameraUtilsTest extends BaseTest {
assertEquals(0, other.getPixel(0, h-1));
assertEquals(0, other.getPixel(w-1, 0));
assertEquals(0, other.getPixel(w-1, h-1));
// TODO: improve when we add EXIF writing to byte arrays
}

@ -29,19 +29,14 @@ import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import org.mockito.stubbing.Stubber;
import static junit.framework.Assert.assertNotNull;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.mockito.ArgumentMatchers.nullable;
import static org.mockito.Matchers.any;
import static org.mockito.Matchers.anyFloat;
import static org.mockito.Matchers.anyInt;
import static org.mockito.Matchers.eq;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.times;
@ -54,6 +49,8 @@ import static org.mockito.Mockito.verify;
@MediumTest
public class CameraViewCallbacksTest extends BaseTest {
private final static long DELAY = 500;
private CameraView camera;
private CameraListener listener;
private FrameProcessor processor;
@ -63,10 +60,10 @@ public class CameraViewCallbacksTest extends BaseTest {
@Before
public void setUp() {
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
Context context = context();
Context context = getContext();
listener = mock(CameraListener.class);
processor = mock(FrameProcessor.class);
camera = new CameraView(context) {
@ -106,99 +103,101 @@ public class CameraViewCallbacksTest extends BaseTest {
listener = null;
}
// Completes our op.
private Stubber completeTask() {
return doAnswer(new Answer() {
@Override
public Object answer(InvocationOnMock invocation) throws Throwable {
op.end(true);
return null;
}
});
}
@Test
public void testDontDispatchIfRemoved() {
camera.removeCameraListener(listener);
completeTask().when(listener).onCameraOpened(null);
camera.mCameraCallbacks.dispatchOnCameraOpened(null);
CameraOptions options = mock(CameraOptions.class);
doEndOp(op, true).when(listener).onCameraOpened(options);
camera.mCameraCallbacks.dispatchOnCameraOpened(options);
assertNull(op.await(500));
verify(listener, never()).onCameraOpened(null);
assertNull(op.await(DELAY));
verify(listener, never()).onCameraOpened(options);
}
@Test
public void testDontDispatchIfCleared() {
camera.clearCameraListeners();
completeTask().when(listener).onCameraOpened(null);
camera.mCameraCallbacks.dispatchOnCameraOpened(null);
CameraOptions options = mock(CameraOptions.class);
doEndOp(op, true).when(listener).onCameraOpened(options);
camera.mCameraCallbacks.dispatchOnCameraOpened(options);
assertNull(op.await(500));
verify(listener, never()).onCameraOpened(null);
assertNull(op.await(DELAY));
verify(listener, never()).onCameraOpened(options);
}
@Test
public void testDispatchOnCameraOpened() {
completeTask().when(listener).onCameraOpened(null);
camera.mCameraCallbacks.dispatchOnCameraOpened(null);
CameraOptions options = mock(CameraOptions.class);
doEndOp(op, true).when(listener).onCameraOpened(options);
camera.mCameraCallbacks.dispatchOnCameraOpened(options);
assertNotNull(op.await(500));
verify(listener, times(1)).onCameraOpened(null);
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onCameraOpened(options);
}
@Test
public void testDispatchOnCameraClosed() {
completeTask().when(listener).onCameraClosed();
doEndOp(op, true).when(listener).onCameraClosed();
camera.mCameraCallbacks.dispatchOnCameraClosed();
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onCameraClosed();
}
@Test
public void testDispatchOnVideoRecordingStart() {
completeTask().when(listener).onVideoRecordingStart();
doEndOp(op, true).when(listener).onVideoRecordingStart();
camera.mCameraCallbacks.dispatchOnVideoRecordingStart();
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onVideoRecordingStart();
}
@Test
public void testDispatchOnVideoRecordingEnd() {
doEndOp(op, true).when(listener).onVideoRecordingEnd();
camera.mCameraCallbacks.dispatchOnVideoRecordingEnd();
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onVideoRecordingEnd();
}
@Test
public void testDispatchOnVideoTaken() {
VideoResult.Stub stub = new VideoResult.Stub();
completeTask().when(listener).onVideoTaken(any(VideoResult.class));
doEndOp(op, true).when(listener).onVideoTaken(any(VideoResult.class));
camera.mCameraCallbacks.dispatchOnVideoTaken(stub);
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onVideoTaken(any(VideoResult.class));
}
@Test
public void testDispatchOnPictureTaken() {
PictureResult.Stub stub = new PictureResult.Stub();
completeTask().when(listener).onPictureTaken(any(PictureResult.class));
doEndOp(op, true).when(listener).onPictureTaken(any(PictureResult.class));
camera.mCameraCallbacks.dispatchOnPictureTaken(stub);
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onPictureTaken(any(PictureResult.class));
}
@Test
public void testDispatchOnZoomChanged() {
completeTask().when(listener).onZoomChanged(eq(0f), eq(new float[]{0, 1}), nullable(PointF[].class));
doEndOp(op, true).when(listener).onZoomChanged(eq(0f), eq(new float[]{0, 1}), nullable(PointF[].class));
camera.mCameraCallbacks.dispatchOnZoomChanged(0f, null);
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onZoomChanged(eq(0f), eq(new float[]{0, 1}), nullable(PointF[].class));
}
@Test
public void testDispatchOnExposureCorrectionChanged() {
completeTask().when(listener).onExposureCorrectionChanged(0f, null, null);
camera.mCameraCallbacks.dispatchOnExposureCorrectionChanged(0f, null, null);
float[] bounds = new float[]{};
doEndOp(op, true).when(listener).onExposureCorrectionChanged(0f, bounds, null);
camera.mCameraCallbacks.dispatchOnExposureCorrectionChanged(0f, bounds, null);
assertNotNull(op.await(500));
verify(listener, times(1)).onExposureCorrectionChanged(0f, null, null);
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onExposureCorrectionChanged(0f, bounds, null);
}
@Test
@ -212,10 +211,10 @@ public class CameraViewCallbacksTest extends BaseTest {
camera.mMarkerLayout = markerLayout;
PointF point = new PointF();
completeTask().when(listener).onAutoFocusStart(point);
doEndOp(op, true).when(listener).onAutoFocusStart(point);
camera.mCameraCallbacks.dispatchOnFocusStart(Gesture.TAP, point);
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onAutoFocusStart(point);
verify(marker, times(1)).onAutoFocusStart(AutoFocusTrigger.GESTURE, point);
verify(markerLayout, times(1)).onEvent(eq(MarkerLayout.TYPE_AUTOFOCUS), any(PointF[].class));
@ -231,10 +230,10 @@ public class CameraViewCallbacksTest extends BaseTest {
PointF point = new PointF();
boolean success = true;
completeTask().when(listener).onAutoFocusEnd(success, point);
doEndOp(op, true).when(listener).onAutoFocusEnd(success, point);
camera.mCameraCallbacks.dispatchOnFocusEnd(Gesture.TAP, success, point);
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onAutoFocusEnd(success, point);
verify(marker, times(1)).onAutoFocusEnd(AutoFocusTrigger.GESTURE, success, point);
@ -243,9 +242,9 @@ public class CameraViewCallbacksTest extends BaseTest {
@Test
public void testOrientationCallbacks() {
completeTask().when(listener).onOrientationChanged(anyInt());
doEndOp(op, true).when(listener).onOrientationChanged(anyInt());
camera.mCameraCallbacks.onDeviceOrientationChanged(90);
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onOrientationChanged(anyInt());
}
@ -254,20 +253,20 @@ public class CameraViewCallbacksTest extends BaseTest {
@Test
public void testCameraError() {
CameraException error = new CameraException(new RuntimeException("Error"));
completeTask().when(listener).onCameraError(error);
doEndOp(op, true).when(listener).onCameraError(error);
camera.mCameraCallbacks.dispatchError(error);
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(listener, times(1)).onCameraError(error);
}
@Test
public void testProcessFrame() {
Frame mock = mock(Frame.class);
completeTask().when(processor).process(mock);
doEndOp(op, true).when(processor).process(mock);
camera.mCameraCallbacks.dispatchFrame(mock);
assertNotNull(op.await(500));
assertNotNull(op.await(DELAY));
verify(processor, times(1)).process(mock);
}
}

@ -10,7 +10,6 @@ import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.filters.MediumTest;
import android.util.AttributeSet;
import android.view.Gravity;
import android.view.LayoutInflater;
import android.view.MotionEvent;
import android.view.View;
@ -52,9 +51,6 @@ import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import org.w3c.dom.Attr;
import static org.junit.Assert.*;
@ -74,10 +70,10 @@ public class CameraViewTest extends BaseTest {
@Before
public void setUp() {
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
Context context = context();
Context context = getContext();
cameraView = new CameraView(context) {
@NonNull
@ -151,8 +147,8 @@ public class CameraViewTest extends BaseTest {
@Test
public void testDefaults() {
// CameraEngine
TypedArray empty = context().obtainStyledAttributes(new int[]{});
ControlParser controls = new ControlParser(context(), empty);
TypedArray empty = getContext().obtainStyledAttributes(new int[]{});
ControlParser controls = new ControlParser(getContext(), empty);
assertEquals(cameraView.getFlash(), controls.getFlash());
assertEquals(cameraView.getFacing(), controls.getFacing());
assertEquals(cameraView.getGrid(), controls.getGrid());
@ -236,7 +232,7 @@ public class CameraViewTest extends BaseTest {
mockController.setMockCameraOptions(o);
mockController.setMockEngineState(true);
MotionEvent event = MotionEvent.obtain(0L, 0L, 0, 0f, 0f, 0);
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
cameraView.mTapGestureFinder = new TapGestureFinder(cameraView.mCameraCallbacks) {
@ -258,7 +254,7 @@ public class CameraViewTest extends BaseTest {
mockController.setMockCameraOptions(o);
mockController.setMockEngineState(true);
MotionEvent event = MotionEvent.obtain(0L, 0L, 0, 0f, 0f, 0);
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
cameraView.mTapGestureFinder = new TapGestureFinder(cameraView.mCameraCallbacks) {
@ -285,7 +281,7 @@ public class CameraViewTest extends BaseTest {
mockController.mZoomChanged = false;
MotionEvent event = MotionEvent.obtain(0L, 0L, 0, 0f, 0f, 0);
final FactorHolder factor = new FactorHolder();
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
cameraView.mPinchGestureFinder = new PinchGestureFinder(cameraView.mCameraCallbacks) {
@ -326,7 +322,7 @@ public class CameraViewTest extends BaseTest {
mockController.mExposureCorrectionChanged = false;
MotionEvent event = MotionEvent.obtain(0L, 0L, 0, 0f, 0f, 0);
final FactorHolder factor = new FactorHolder();
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
cameraView.mScrollGestureFinder = new ScrollGestureFinder(cameraView.mCameraCallbacks) {
@ -628,6 +624,10 @@ public class CameraViewTest extends BaseTest {
assertEquals(cameraView.get(Audio.class), Audio.ON);
cameraView.set(Audio.OFF);
assertEquals(cameraView.get(Audio.class), Audio.OFF);
cameraView.set(Audio.MONO);
assertEquals(cameraView.get(Audio.class), Audio.MONO);
cameraView.set(Audio.STEREO);
assertEquals(cameraView.get(Audio.class), Audio.STEREO);
}
@Test
@ -681,7 +681,6 @@ public class CameraViewTest extends BaseTest {
//region Lists of listeners and processors
@SuppressWarnings("UseBulkOperation")
@Test
public void testCameraListenerList() {
assertTrue(cameraView.mListeners.isEmpty());
@ -709,7 +708,6 @@ public class CameraViewTest extends BaseTest {
}
}
@SuppressWarnings({"NullableProblems", "UseBulkOperation"})
@Test
public void testFrameProcessorsList() {
assertTrue(cameraView.mFrameProcessors.isEmpty());
@ -771,13 +769,7 @@ public class CameraViewTest extends BaseTest {
final PointF point = new PointF(0, 0);
final PointF[] points = new PointF[]{ point };
final Op<Boolean> op = new Op<>(true);
doAnswer(new Answer() {
@Override
public Object answer(InvocationOnMock invocation) throws Throwable {
op.end(true);
return null;
}
}).when(markerLayout).onEvent(MarkerLayout.TYPE_AUTOFOCUS, points);
doEndOp(op, true).when(markerLayout).onEvent(MarkerLayout.TYPE_AUTOFOCUS, points);
cameraView.mCameraCallbacks.dispatchOnFocusStart(Gesture.TAP, point);
assertNotNull(op.await(100));
}
@ -789,7 +781,7 @@ public class CameraViewTest extends BaseTest {
@Test
public void testOverlays_generateLayoutParams() {
cameraView.mOverlayLayout = spy(cameraView.mOverlayLayout);
LayoutInflater inflater = LayoutInflater.from(context());
LayoutInflater inflater = LayoutInflater.from(getContext());
View overlay = inflater.inflate(com.otaliastudios.cameraview.test.R.layout.overlay, cameraView, false);
assertTrue(overlay.getLayoutParams() instanceof OverlayLayout.LayoutParams);
verify(cameraView.mOverlayLayout, times(1)).isOverlay(any(AttributeSet.class));
@ -800,7 +792,7 @@ public class CameraViewTest extends BaseTest {
@Test
public void testOverlays_dontGenerateLayoutParams() {
cameraView.mOverlayLayout = spy(cameraView.mOverlayLayout);
LayoutInflater inflater = LayoutInflater.from(context());
LayoutInflater inflater = LayoutInflater.from(getContext());
View overlay = inflater.inflate(com.otaliastudios.cameraview.test.R.layout.not_overlay, cameraView, false);
assertFalse(overlay.getLayoutParams() instanceof OverlayLayout.LayoutParams);
verify(cameraView.mOverlayLayout, times(1)).isOverlay(any(AttributeSet.class));
@ -810,7 +802,7 @@ public class CameraViewTest extends BaseTest {
@Test
public void testOverlays_addOverlayView() {
cameraView.mOverlayLayout = spy(cameraView.mOverlayLayout);
View overlay = new View(context());
View overlay = new View(getContext());
OverlayLayout.LayoutParams params = new OverlayLayout.LayoutParams(10, 10);
int count = cameraView.getChildCount();
cameraView.addView(overlay, 0, params);
@ -822,7 +814,7 @@ public class CameraViewTest extends BaseTest {
@Test
public void testOverlays_dontAddOverlayView() {
cameraView.mOverlayLayout = spy(cameraView.mOverlayLayout);
View overlay = new View(context());
View overlay = new View(getContext());
ViewGroup.LayoutParams params = new ViewGroup.LayoutParams(10, 10);
int count = cameraView.getChildCount();
cameraView.addView(overlay, 0, params);

@ -77,7 +77,7 @@ public abstract class CameraIntegrationTest extends BaseTest {
@BeforeClass
public static void grant() {
grantPermissions();
grantAllPermissions();
}
@NonNull
@ -88,7 +88,7 @@ public abstract class CameraIntegrationTest extends BaseTest {
LOG.e("Test started. Setting up camera.");
WorkerHandler.destroy();
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
camera = new CameraView(rule.getActivity()) {
@ -139,12 +139,13 @@ public abstract class CameraIntegrationTest extends BaseTest {
private CameraOptions openSync(boolean expectSuccess) {
camera.open();
final Op<CameraOptions> open = new Op<>(true);
doEndTask(open, 0).when(listener).onCameraOpened(any(CameraOptions.class));
doEndOp(open, 0).when(listener).onCameraOpened(any(CameraOptions.class));
CameraOptions result = open.await(DELAY);
if (expectSuccess) {
assertNotNull("Can open", result);
// Extra wait for the bind state.
// TODO fix this and other while {} in this class in a more elegant way.
//noinspection StatementWithEmptyBody
while (controller.getBindState() != CameraEngine.STATE_STARTED) {}
} else {
assertNull("Should not open", result);
@ -155,7 +156,7 @@ public abstract class CameraIntegrationTest extends BaseTest {
private void closeSync(boolean expectSuccess) {
camera.close();
final Op<Boolean> close = new Op<>(true);
doEndTask(close, true).when(listener).onCameraClosed();
doEndOp(close, true).when(listener).onCameraClosed();
Boolean result = close.await(DELAY);
if (expectSuccess) {
assertNotNull("Can close", result);
@ -167,16 +168,24 @@ public abstract class CameraIntegrationTest extends BaseTest {
@SuppressWarnings("UnusedReturnValue")
@Nullable
private VideoResult waitForVideoResult(boolean expectSuccess) {
// CountDownLatch for onVideoRecordingEnd.
CountDownLatch onVideoRecordingEnd = new CountDownLatch(1);
doCountDown(onVideoRecordingEnd).when(listener).onVideoRecordingEnd();
// Op for onVideoTaken.
final Op<VideoResult> video = new Op<>(true);
doEndTask(video, 0).when(listener).onVideoTaken(any(VideoResult.class));
doEndTask(video, null).when(listener).onCameraError(argThat(new ArgumentMatcher<CameraException>() {
doEndOp(video, 0).when(listener).onVideoTaken(any(VideoResult.class));
doEndOp(video, null).when(listener).onCameraError(argThat(new ArgumentMatcher<CameraException>() {
@Override
public boolean matches(CameraException argument) {
return argument.getReason() == CameraException.REASON_VIDEO_FAILED;
}
}));
// Wait for onVideoTaken and check.
VideoResult result = video.await(VIDEO_DELAY);
if (expectSuccess) {
assertEquals("Should call onVideoRecordingEnd", 0, onVideoRecordingEnd.getCount());
assertNotNull("Should end video", result);
} else {
assertNull("Should not end video", result);
@ -187,8 +196,8 @@ public abstract class CameraIntegrationTest extends BaseTest {
@Nullable
private PictureResult waitForPictureResult(boolean expectSuccess) {
final Op<PictureResult> pic = new Op<>(true);
doEndTask(pic, 0).when(listener).onPictureTaken(any(PictureResult.class));
doEndTask(pic, null).when(listener).onCameraError(argThat(new ArgumentMatcher<CameraException>() {
doEndOp(pic, 0).when(listener).onPictureTaken(any(PictureResult.class));
doEndOp(pic, null).when(listener).onCameraError(argThat(new ArgumentMatcher<CameraException>() {
@Override
public boolean matches(CameraException argument) {
return argument.getReason() == CameraException.REASON_PICTURE_FAILED;
@ -209,14 +218,14 @@ public abstract class CameraIntegrationTest extends BaseTest {
private void takeVideoSync(boolean expectSuccess, int duration) {
final Op<Boolean> op = new Op<>(true);
doEndTask(op, true).when(listener).onVideoRecordingStart();
doEndTask(op, false).when(listener).onCameraError(argThat(new ArgumentMatcher<CameraException>() {
doEndOp(op, true).when(listener).onVideoRecordingStart();
doEndOp(op, false).when(listener).onCameraError(argThat(new ArgumentMatcher<CameraException>() {
@Override
public boolean matches(CameraException argument) {
return argument.getReason() == CameraException.REASON_VIDEO_FAILED;
}
}));
File file = new File(context().getFilesDir(), "video.mp4");
File file = new File(getContext().getFilesDir(), "video.mp4");
if (duration > 0) {
camera.takeVideo(file, duration);
} else {
@ -231,21 +240,21 @@ public abstract class CameraIntegrationTest extends BaseTest {
}
}
@SuppressWarnings("unused")
@SuppressWarnings({"unused", "SameParameterValue"})
private void takeVideoSnapshotSync(boolean expectSuccess) {
takeVideoSnapshotSync(expectSuccess,0);
}
private void takeVideoSnapshotSync(boolean expectSuccess, int duration) {
final Op<Boolean> op = new Op<>(true);
doEndTask(op, true).when(listener).onVideoRecordingStart();
doEndTask(op, false).when(listener).onCameraError(argThat(new ArgumentMatcher<CameraException>() {
doEndOp(op, true).when(listener).onVideoRecordingStart();
doEndOp(op, false).when(listener).onCameraError(argThat(new ArgumentMatcher<CameraException>() {
@Override
public boolean matches(CameraException argument) {
return argument.getReason() == CameraException.REASON_VIDEO_FAILED;
}
}));
File file = new File(context().getFilesDir(), "video.mp4");
File file = new File(getContext().getFilesDir(), "video.mp4");
if (duration > 0) {
camera.takeVideoSnapshot(file, duration);
} else {
@ -541,11 +550,10 @@ public abstract class CameraIntegrationTest extends BaseTest {
@Test
public void testEndVideoSnapshot_withMaxSize() {
// TODO
// camera.setVideoMaxSize(3000*1000);
// waitForOpen(true);
// waitForVideoStart();
// waitForVideoEnd(true);
camera.setVideoMaxSize(3000*1000);
openSync(true);
takeVideoSnapshotSync(true);
waitForVideoResult(true);
}
@Test
@ -559,11 +567,10 @@ public abstract class CameraIntegrationTest extends BaseTest {
@Test
public void testEndVideoSnapshot_withMaxDuration() {
// TODO
// camera.setVideoMaxDuration(4000);
// waitForOpen(true);
// waitForVideoStart();
// waitForVideoEnd(true);
camera.setVideoMaxDuration(4000);
openSync(true);
takeVideoSnapshotSync(true);
waitForVideoResult(true);
}
//endregion
@ -575,7 +582,7 @@ public abstract class CameraIntegrationTest extends BaseTest {
CameraOptions o = openSync(true);
final Op<PointF> focus = new Op<>(true);
doEndTask(focus, 0).when(listener).onAutoFocusStart(any(PointF.class));
doEndOp(focus, 0).when(listener).onAutoFocusStart(any(PointF.class));
camera.startAutoFocus(1, 1);
PointF point = focus.await(300);
@ -592,7 +599,7 @@ public abstract class CameraIntegrationTest extends BaseTest {
CameraOptions o = openSync(true);
final Op<PointF> focus = new Op<>(true);
doEndTask(focus, 1).when(listener).onAutoFocusEnd(anyBoolean(), any(PointF.class));
doEndOp(focus, 1).when(listener).onAutoFocusEnd(anyBoolean(), any(PointF.class));
camera.startAutoFocus(1, 1);
// Stop is not guaranteed to be called, we use a delay. So wait at least the delay time.
@ -632,7 +639,7 @@ public abstract class CameraIntegrationTest extends BaseTest {
@SuppressWarnings("StatementWithEmptyBody")
@Test
public void testCapturePicture_size() throws Exception {
public void testCapturePicture_size() {
openSync(true);
// PictureSize can still be null after opened.
// TODO be more elegant
@ -682,7 +689,7 @@ public abstract class CameraIntegrationTest extends BaseTest {
@SuppressWarnings("StatementWithEmptyBody")
@Test
public void testCaptureSnapshot_size() throws Exception {
public void testCaptureSnapshot_size() {
openSync(true);
// SnapshotSize can still be null after opened.
// TODO be more elegant

@ -9,22 +9,18 @@ import com.google.android.gms.tasks.Tasks;
import com.otaliastudios.cameraview.CameraOptions;
import com.otaliastudios.cameraview.PictureResult;
import com.otaliastudios.cameraview.VideoResult;
import com.otaliastudios.cameraview.controls.Audio;
import com.otaliastudios.cameraview.controls.Facing;
import com.otaliastudios.cameraview.controls.Flash;
import com.otaliastudios.cameraview.frame.FrameManager;
import com.otaliastudios.cameraview.gesture.Gesture;
import com.otaliastudios.cameraview.controls.Hdr;
import com.otaliastudios.cameraview.controls.Mode;
import com.otaliastudios.cameraview.controls.WhiteBalance;
import com.otaliastudios.cameraview.size.AspectRatio;
import com.otaliastudios.cameraview.size.Size;
import com.otaliastudios.cameraview.size.SizeSelector;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import java.io.File;
import java.util.ArrayList;
import java.util.List;
@ -177,9 +173,4 @@ public class MockCameraEngine extends CameraEngine {
protected boolean collectCameraInfo(@NonNull Facing facing) {
return true;
}
@Override
public void onVideoRecordingStart() {
}
}

@ -40,7 +40,7 @@ public abstract class GestureFinderTest<T extends GestureFinder> extends BaseTes
@Before
public void setUp() {
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
TestActivity a = rule.getActivity();

@ -1,8 +1,6 @@
package com.otaliastudios.cameraview.gesture;
import android.content.Context;
import androidx.annotation.NonNull;
import androidx.test.espresso.ViewAction;
import androidx.test.ext.junit.runners.AndroidJUnit4;
@ -11,10 +9,8 @@ import androidx.test.filters.SmallTest;
import org.junit.Test;
import org.junit.runner.RunWith;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
@RunWith(AndroidJUnit4.class)

@ -1,8 +1,6 @@
package com.otaliastudios.cameraview.gesture;
import android.content.Context;
import androidx.annotation.NonNull;
import androidx.test.espresso.ViewAction;
import androidx.test.ext.junit.runners.AndroidJUnit4;
@ -11,13 +9,10 @@ import androidx.test.filters.SmallTest;
import org.junit.Test;
import org.junit.runner.RunWith;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.swipeDown;
import static androidx.test.espresso.action.ViewActions.swipeLeft;
import static androidx.test.espresso.action.ViewActions.swipeRight;
import static androidx.test.espresso.action.ViewActions.swipeUp;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static junit.framework.Assert.assertNotNull;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertTrue;
@ -33,7 +28,7 @@ public class ScrollGestureFinderTest extends GestureFinderTest<ScrollGestureFind
@Test
public void testDefaults() {
assertNull(finder.getGesture());
assertNull(finder.mType);
assertEquals(finder.getPoints().length, 2);
assertEquals(finder.getPoints()[0].x, 0, 0);
assertEquals(finder.getPoints()[0].y, 0, 0);

@ -1,8 +1,6 @@
package com.otaliastudios.cameraview.gesture;
import android.content.Context;
import androidx.annotation.NonNull;
import androidx.test.espresso.action.GeneralClickAction;
import androidx.test.espresso.action.GeneralLocation;
@ -32,7 +30,7 @@ public class TapGestureFinderTest extends GestureFinderTest<TapGestureFinder> {
@Test
public void testDefaults() {
assertNull(finder.getGesture());
assertNull(finder.mType);
assertEquals(finder.getPoints().length, 1);
assertEquals(finder.getPoints()[0].x, 0, 0);
assertEquals(finder.getPoints()[0].y, 0, 0);

@ -27,7 +27,7 @@ public class GridLinesLayoutTest extends BaseTest {
@Before
public void setUp() {
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
TestActivity a = rule.getActivity();

@ -4,7 +4,6 @@ package com.otaliastudios.cameraview.internal.utils;
import android.graphics.Rect;
import com.otaliastudios.cameraview.BaseTest;
import com.otaliastudios.cameraview.internal.utils.CropHelper;
import com.otaliastudios.cameraview.size.AspectRatio;
import com.otaliastudios.cameraview.size.Size;
@ -15,11 +14,8 @@ import org.junit.Test;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertTrue;
import static org.mockito.Matchers.any;
import static org.mockito.Mockito.mock;
@RunWith(AndroidJUnit4.class)
@SmallTest

@ -6,7 +6,6 @@ import androidx.test.filters.SmallTest;
import android.view.OrientationEventListener;
import com.otaliastudios.cameraview.BaseTest;
import com.otaliastudios.cameraview.internal.utils.OrientationHelper;
import org.junit.After;
import org.junit.Before;
@ -25,11 +24,11 @@ public class OrientationHelperTest extends BaseTest {
@Before
public void setUp() {
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
callback = mock(OrientationHelper.Callback.class);
helper = new OrientationHelper(context(), callback);
helper = new OrientationHelper(getContext(), callback);
}
});
}
@ -46,12 +45,12 @@ public class OrientationHelperTest extends BaseTest {
assertEquals(helper.getDisplayOffset(), -1);
assertEquals(helper.getDeviceOrientation(), -1);
helper.enable(context());
helper.enable(getContext());
assertNotNull(helper.mListener);
assertNotEquals(helper.getDisplayOffset(), -1); // Don't know about device orientation.
// Ensure nothing bad if called twice.
helper.enable(context());
helper.enable(getContext());
assertNotNull(helper.mListener);
assertNotEquals(helper.getDisplayOffset(), -1);
@ -66,7 +65,7 @@ public class OrientationHelperTest extends BaseTest {
// Sometimes (on some APIs) the helper will trigger an update to 0
// right after enabling. But that's fine for us, times(1) will be OK either way.
helper.enable(context());
helper.enable(getContext());
helper.mListener.onOrientationChanged(OrientationEventListener.ORIENTATION_UNKNOWN);
assertEquals(helper.getDeviceOrientation(), 0);
helper.mListener.onOrientationChanged(10);

@ -11,7 +11,6 @@ import org.junit.Assert;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.manipulation.Filter;
import org.mockito.Mockito;
import androidx.test.annotation.UiThreadTest;
@ -29,7 +28,7 @@ public class MarkerLayoutTest extends BaseTest {
@Before
public void setUp() {
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
TestActivity a = rule.getActivity();

@ -1,17 +1,12 @@
package com.otaliastudios.cameraview.overlay;
import android.content.res.Resources;
import android.content.res.TypedArray;
import android.content.res.XmlResourceParser;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.util.Xml;
import android.view.Gravity;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.FrameLayout;
import androidx.annotation.NonNull;
import androidx.test.annotation.UiThreadTest;
@ -19,35 +14,17 @@ import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.filters.SmallTest;
import com.otaliastudios.cameraview.BaseTest;
import com.otaliastudios.cameraview.R;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.ArgumentCaptor;
import org.mockito.ArgumentMatcher;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import org.w3c.dom.Attr;
import org.xmlpull.v1.XmlPullParser;
import org.xmlpull.v1.XmlPullParserException;
import java.io.IOException;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyFloat;
import static org.mockito.ArgumentMatchers.anyLong;
import static org.mockito.ArgumentMatchers.argThat;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.ArgumentMatchers.notNull;
import static org.mockito.Mockito.doNothing;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.reset;
import static org.mockito.Mockito.spy;
@ -63,7 +40,7 @@ public class OverlayLayoutTest extends BaseTest {
@Before
public void setUp() {
overlayLayout = spy(new OverlayLayout(context()));
overlayLayout = spy(new OverlayLayout(getContext()));
}
@After
@ -97,7 +74,7 @@ public class OverlayLayoutTest extends BaseTest {
@NonNull
private AttributeSet getAttributeSet(int layout) throws Exception {
// Get the attribute set in the correct state: use a parser and move to START_TAG
XmlResourceParser parser = context().getResources().getLayout(layout);
XmlResourceParser parser = getContext().getResources().getLayout(layout);
//noinspection StatementWithEmptyBody
while (parser.next() != XmlResourceParser.START_TAG) {}
return Xml.asAttributeSet(parser);
@ -132,7 +109,7 @@ public class OverlayLayoutTest extends BaseTest {
public void testDrawChild() {
Canvas canvas = new Canvas();
OverlayLayout.LayoutParams params = new OverlayLayout.LayoutParams(10, 10);
View child = new View(context());
View child = new View(getContext());
child.setLayoutParams(params);
when(overlayLayout.doDrawChild(canvas, child, 0)).thenReturn(true);
@ -169,7 +146,7 @@ public class OverlayLayoutTest extends BaseTest {
@Test
public void testDrawOn() {
Canvas canvas = spy(new Canvas());
View child = new View(context());
View child = new View(getContext());
OverlayLayout.LayoutParams params = new OverlayLayout.LayoutParams(10, 10);
params.drawOnPreview = true;
params.drawOnPictureSnapshot = true;

@ -3,7 +3,6 @@ package com.otaliastudios.cameraview.picture;
import com.otaliastudios.cameraview.BaseTest;
import com.otaliastudios.cameraview.PictureResult;
import com.otaliastudios.cameraview.picture.PictureRecorder;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.filters.SmallTest;

@ -32,7 +32,6 @@ public abstract class CameraPreviewTest extends BaseTest {
@Rule
public ActivityTestRule<TestActivity> rule = new ActivityTestRule<>(TestActivity.class);
@SuppressWarnings("WeakerAccess")
protected CameraPreview preview;
@SuppressWarnings("WeakerAccess")
protected Size surfaceSize;
@ -46,7 +45,7 @@ public abstract class CameraPreviewTest extends BaseTest {
available = new Op<>(true);
destroyed = new Op<>(true);
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
TestActivity a = rule.getActivity();
@ -82,7 +81,7 @@ public abstract class CameraPreviewTest extends BaseTest {
// Trigger a destroy.
protected void ensureDestroyed() {
ui(new Runnable() {
uiSync(new Runnable() {
@Override
public void run() {
rule.getActivity().getContentView().removeView(preview.getRootView());

@ -7,10 +7,6 @@ import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.filters.SmallTest;
import android.view.ViewGroup;
import com.otaliastudios.cameraview.preview.CameraPreview;
import com.otaliastudios.cameraview.preview.CameraPreviewTest;
import com.otaliastudios.cameraview.preview.SurfaceCameraPreview;
import org.junit.runner.RunWith;
@RunWith(AndroidJUnit4.class)

@ -7,10 +7,6 @@ import androidx.test.ext.junit.runners.AndroidJUnit4;
import androidx.test.filters.SmallTest;
import android.view.ViewGroup;
import com.otaliastudios.cameraview.preview.CameraPreview;
import com.otaliastudios.cameraview.preview.CameraPreviewTest;
import com.otaliastudios.cameraview.preview.TextureCameraPreview;
import org.junit.runner.RunWith;
@RunWith(AndroidJUnit4.class)

@ -13,9 +13,6 @@ import org.mockito.Mockito;
import java.lang.reflect.Constructor;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
@RunWith(AndroidJUnit4.class)
@SmallTest
@ -27,10 +24,13 @@ public class VideoRecorderTest extends BaseTest {
VideoRecorder.VideoResultListener listener = Mockito.mock(VideoRecorder.VideoResultListener.class);
VideoRecorder recorder = new VideoRecorder(listener) {
@Override
protected void onStart() { dispatchVideoRecordingStart(); }
protected void onStart() {
dispatchVideoRecordingStart();
}
@Override
protected void onStop() {
dispatchVideoRecordingEnd();
dispatchResult();
}
};
@ -38,6 +38,8 @@ public class VideoRecorderTest extends BaseTest {
Mockito.verify(listener,Mockito.times(1) )
.onVideoRecordingStart();
recorder.stop();
Mockito.verify(listener, Mockito.times(1))
.onVideoRecordingEnd();
Mockito.verify(listener, Mockito.times(1))
.onVideoResult(result, null);
}

@ -129,13 +129,30 @@ public abstract class CameraListener {
/**
* Notifies that the actual video recording has started
* Notifies that the actual video recording has started.
* This is the time when actual frames recording starts.
* This can be used to show some indicator while the actual video recording.
*
* This can be used to show some UI indicator for video recording or counting time.
*
* @see #onVideoRecordingEnd()
*/
@UiThread
public void onVideoRecordingStart() {
}
/**
* Notifies that the actual video recording has ended.
* At this point recording has ended, though the file might still be processed.
* The {@link #onVideoTaken(VideoResult)} callback will be called soon.
*
* This can be used to remove UI indicators for video recording.
*
* @see #onVideoRecordingStart()
*/
@UiThread
public void onVideoRecordingEnd() {
}
}
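For reference, a minimal listener wiring up the new callback pair could look like the following. This is a sketch assuming an existing CameraView instance named camera; the log tag and messages are placeholders.

camera.addCameraListener(new CameraListener() {
    @Override
    public void onVideoRecordingStart() {
        // Frames are now being recorded: show a "REC" indicator, start a timer, etc.
        Log.i("Recording", "started");
    }

    @Override
    public void onVideoRecordingEnd() {
        // Recording has ended; the file may still be processed before onVideoTaken().
        Log.i("Recording", "ended, waiting for onVideoTaken");
    }

    @Override
    public void onVideoTaken(@NonNull VideoResult result) {
        Log.i("Recording", "file ready: " + result.getFile());
    }
});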

@ -56,22 +56,23 @@ public final class CameraLogger {
@VisibleForTesting static String lastTag;
private static int sLevel;
private static List<Logger> sLoggers;
private static List<Logger> sLoggers = new ArrayList<>();
@VisibleForTesting static Logger sAndroidLogger = new Logger() {
@Override
public void log(int level, @NonNull String tag, @NonNull String message, @Nullable Throwable throwable) {
switch (level) {
case LEVEL_VERBOSE: Log.v(tag, message, throwable); break;
case LEVEL_INFO: Log.i(tag, message, throwable); break;
case LEVEL_WARNING: Log.w(tag, message, throwable); break;
case LEVEL_ERROR: Log.e(tag, message, throwable); break;
}
}
};
static {
setLogLevel(LEVEL_ERROR);
sLoggers = new ArrayList<>();
sLoggers.add(new Logger() {
@Override
public void log(int level, @NonNull String tag, @NonNull String message, @Nullable Throwable throwable) {
switch (level) {
case LEVEL_VERBOSE: Log.v(tag, message, throwable); break;
case LEVEL_INFO: Log.i(tag, message, throwable); break;
case LEVEL_WARNING: Log.w(tag, message, throwable); break;
case LEVEL_ERROR: Log.e(tag, message, throwable); break;
}
}
});
sLoggers.add(sAndroidLogger);
}
/**

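As a side note on the CameraLogger change above (the built-in Android logger becomes a named sAndroidLogger field so tests can unregister it), application code can plug in its own logger roughly as follows. This is a sketch: MyLogSink is a hypothetical destination, not part of the library.

CameraLogger.setLogLevel(CameraLogger.LEVEL_VERBOSE);
CameraLogger.registerLogger(new CameraLogger.Logger() {
    @Override
    public void log(int level, @NonNull String tag, @NonNull String message, @Nullable Throwable throwable) {
        // Forward library logs to a custom sink in addition to Logcat.
        if (level >= CameraLogger.LEVEL_WARNING) {
            MyLogSink.write(tag + ": " + message, throwable); // hypothetical sink
        }
    }
});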
@ -6,10 +6,8 @@ import android.hardware.Camera;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.media.CamcorderProfile;
import android.media.ImageReader;
import android.media.MediaRecorder;
import android.os.Build;
import android.util.Range;
@ -63,8 +61,6 @@ public class CameraOptions {
private boolean autoFocusSupported;
// Camera1Engine constructor.
@SuppressWarnings("deprecation")
public CameraOptions(@NonNull Camera.Parameters params, boolean flipSizes) {
List<String> strings;
Mapper mapper = Mapper.get(Engine.CAMERA1);
@ -151,7 +147,6 @@ public class CameraOptions {
// Camera2Engine constructor.
@RequiresApi(Build.VERSION_CODES.LOLLIPOP)
@SuppressWarnings("deprecation")
public CameraOptions(@NonNull CameraManager manager, @NonNull String cameraId, boolean flipSizes) throws CameraAccessException {
Mapper mapper = Mapper.get(Engine.CAMERA2);
CameraCharacteristics cameraCharacteristics = manager.getCameraCharacteristics(cameraId);
@ -323,7 +318,6 @@ public class CameraOptions {
*
* @return a collection of supported values.
*/
@SuppressWarnings("WeakerAccess")
@NonNull
public Collection<Size> getSupportedPictureSizes() {
return Collections.unmodifiableSet(supportedPictureSizes);
@ -347,7 +341,6 @@ public class CameraOptions {
*
* @return a collection of supported values.
*/
@SuppressWarnings("WeakerAccess")
@NonNull
public Collection<Size> getSupportedVideoSizes() {
return Collections.unmodifiableSet(supportedVideoSizes);
@ -373,7 +366,6 @@ public class CameraOptions {
* @see Facing#FRONT
* @return a collection of supported values.
*/
@SuppressWarnings("WeakerAccess")
@NonNull
public Collection<Facing> getSupportedFacing() {
return Collections.unmodifiableSet(supportedFacing);
@ -389,7 +381,6 @@ public class CameraOptions {
* @see Flash#TORCH
* @return a collection of supported values.
*/
@SuppressWarnings("WeakerAccess")
@NonNull
public Collection<Flash> getSupportedFlash() {
return Collections.unmodifiableSet(supportedFlash);
@ -406,7 +397,6 @@ public class CameraOptions {
* @see WhiteBalance#CLOUDY
* @return a collection of supported values.
*/
@SuppressWarnings("WeakerAccess")
@NonNull
public Collection<WhiteBalance> getSupportedWhiteBalance() {
return Collections.unmodifiableSet(supportedWhiteBalance);
@ -432,7 +422,6 @@ public class CameraOptions {
*
* @return whether zoom is supported.
*/
@SuppressWarnings("WeakerAccess")
public boolean isZoomSupported() {
return zoomSupported;
}
@ -444,7 +433,6 @@ public class CameraOptions {
*
* @return whether auto focus is supported.
*/
@SuppressWarnings("WeakerAccess")
public boolean isAutoFocusSupported() {
return autoFocusSupported;
}
@ -458,7 +446,6 @@ public class CameraOptions {
* @see #getExposureCorrectionMaxValue()
* @return whether exposure correction is supported.
*/
@SuppressWarnings("WeakerAccess")
public boolean isExposureCorrectionSupported() {
return exposureCorrectionSupported;
}
@ -470,7 +457,6 @@ public class CameraOptions {
*
* @return min EV value
*/
@SuppressWarnings("WeakerAccess")
public float getExposureCorrectionMinValue() {
return exposureCorrectionMinValue;
}
@ -482,7 +468,6 @@ public class CameraOptions {
*
* @return max EV value
*/
@SuppressWarnings("WeakerAccess")
public float getExposureCorrectionMaxValue() {
return exposureCorrectionMaxValue;
}

@ -678,7 +678,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
Context c = getContext();
boolean needsCamera = true;
boolean needsAudio = audio == Audio.ON;
boolean needsAudio = audio == Audio.ON || audio == Audio.MONO || audio == Audio.STEREO;
needsCamera = needsCamera && c.checkSelfPermission(Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED;
needsAudio = needsAudio && c.checkSelfPermission(Manifest.permission.RECORD_AUDIO) != PackageManager.PERMISSION_GRANTED;
@ -696,7 +696,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
* If the developer did not add this to its manifest, throw and fire warnings.
*/
private void checkPermissionsManifestOrThrow(@NonNull Audio audio) {
if (audio == Audio.ON) {
if (audio == Audio.ON || audio == Audio.MONO || audio == Audio.STEREO) {
try {
PackageManager manager = getContext().getPackageManager();
PackageInfo info = manager.getPackageInfo(getContext().getPackageName(), PackageManager.GET_PERMISSIONS);
@ -1174,6 +1174,8 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
*
* @see Audio#OFF
* @see Audio#ON
* @see Audio#MONO
* @see Audio#STEREO
*
* @param audio desired audio value
*/
@ -2078,7 +2080,7 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
@Override
public void dispatchOnVideoRecordingStart() {
mLogger.i("dispatchOnVideoRecordingStart", "dispatchOnVideoRecordingStart");
mLogger.i("dispatchOnVideoRecordingStart");
mUiHandler.post(new Runnable() {
@Override
public void run() {
@ -2088,6 +2090,19 @@ public class CameraView extends FrameLayout implements LifecycleObserver {
}
});
}
@Override
public void dispatchOnVideoRecordingEnd() {
mLogger.i("dispatchOnVideoRecordingEnd");
mUiHandler.post(new Runnable() {
@Override
public void run() {
for (CameraListener listener : mListeners) {
listener.onVideoRecordingEnd();
}
}
});
}
}
//endregion

@ -14,14 +14,26 @@ import androidx.annotation.Nullable;
public enum Audio implements Control {
/**
* No Audio.
* No audio.
*/
OFF(0),
/**
* With Audio.
* Audio on. The number of channels depends on the video configuration,
* on the device capabilities and on the video type (e.g. we default to
* mono for snapshots).
*/
ON(1);
ON(1),
/**
* Force mono channel audio.
*/
MONO(2),
/**
* Force stereo audio.
*/
STEREO(3);
final static Audio DEFAULT = ON;
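A quick usage sketch for the new values, assuming a CameraView instance named camera. As the CameraView permission checks above show, MONO and STEREO require the RECORD_AUDIO permission just like ON.

camera.setAudio(Audio.STEREO); // force two audio channels
camera.setAudio(Audio.MONO);   // force a single channel
camera.setAudio(Audio.ON);     // let the engine pick the channel count
// XML equivalent: app:cameraAudio="stereo" (or "mono", "on", "off")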

@ -136,6 +136,7 @@ public abstract class CameraEngine implements
void dispatchFrame(Frame frame);
void dispatchError(CameraException exception);
void dispatchOnVideoRecordingStart();
void dispatchOnVideoRecordingEnd();
}
private static final String TAG = CameraEngine.class.getSimpleName();
@ -1210,6 +1211,10 @@ public abstract class CameraEngine implements
mCallback.dispatchOnVideoRecordingStart();
}
@Override
public void onVideoRecordingEnd() {
mCallback.dispatchOnVideoRecordingEnd();
}
@WorkerThread
protected abstract void onTakePicture(@NonNull PictureResult.Stub stub);

@ -3,6 +3,8 @@ package com.otaliastudios.cameraview.gesture;
import android.content.Context;
import android.graphics.PointF;
import androidx.annotation.NonNull;
import androidx.annotation.VisibleForTesting;
import android.view.MotionEvent;
/**
@ -22,7 +24,7 @@ public abstract class GestureFinder {
private final static int GRANULARITY = 50;
private boolean mActive;
private Gesture mType;
@VisibleForTesting Gesture mType;
private PointF[] mPoints;
private Controller mController;

@ -9,7 +9,7 @@ import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
/**
 * Base class for pools of recyclable objects.
 * Base class for thread-safe pools of recyclable objects.
* @param <T> the object type
*/
public class Pool<T> {
@ -19,8 +19,9 @@ public class Pool<T> {
private int maxPoolSize;
private int activeCount;
private LinkedBlockingQueue<T> mQueue;
private LinkedBlockingQueue<T> queue;
private Factory<T> factory;
private final Object lock = new Object();
/**
* Used to create new instances of objects when needed.
@ -37,7 +38,7 @@ public class Pool<T> {
*/
public Pool(int maxPoolSize, @NonNull Factory<T> factory) {
this.maxPoolSize = maxPoolSize;
this.mQueue = new LinkedBlockingQueue<>(maxPoolSize);
this.queue = new LinkedBlockingQueue<>(maxPoolSize);
this.factory = factory;
}
@ -48,7 +49,9 @@ public class Pool<T> {
* @return whether the pool is empty
*/
public boolean isEmpty() {
return count() >= maxPoolSize;
synchronized (lock) {
return count() >= maxPoolSize;
}
}
/**
@ -60,21 +63,23 @@ public class Pool<T> {
*/
@Nullable
public T get() {
T item = mQueue.poll();
if (item != null) {
activeCount++; // poll decreases, this fixes
LOG.v("GET - Reusing recycled item.", this);
return item;
}
if (isEmpty()) {
LOG.v("GET - Returning null. Too much items requested.", this);
return null;
synchronized (lock) {
T item = queue.poll();
if (item != null) {
activeCount++; // poll() removed it from the recycled queue, so count it as active to keep count() stable.
LOG.v("GET - Reusing recycled item.", this);
return item;
}
if (isEmpty()) {
LOG.v("GET - Returning null. Too much items requested.", this);
return null;
}
activeCount++;
LOG.v("GET - Creating a new item.", this);
return factory.create();
}
activeCount++;
LOG.v("GET - Creating a new item.", this);
return factory.create();
}
/**
@ -84,16 +89,18 @@ public class Pool<T> {
* @param item used item
*/
public void recycle(@NonNull T item) {
LOG.v("RECYCLE - Recycling item.", this);
if (--activeCount < 0) {
throw new IllegalStateException("Trying to recycle an item which makes activeCount < 0." +
"This means that this or some previous items being recycled were not coming from " +
"this pool, or some item was recycled more than once. " + this);
}
if (!mQueue.offer(item)) {
throw new IllegalStateException("Trying to recycle an item while the queue is full. " +
"This means that this or some previous items being recycled were not coming from " +
"this pool, or some item was recycled more than once. " + this);
synchronized (lock) {
LOG.v("RECYCLE - Recycling item.", this);
if (--activeCount < 0) {
throw new IllegalStateException("Trying to recycle an item which makes activeCount < 0." +
"This means that this or some previous items being recycled were not coming from " +
"this pool, or some item was recycled more than once. " + this);
}
if (!queue.offer(item)) {
throw new IllegalStateException("Trying to recycle an item while the queue is full. " +
"This means that this or some previous items being recycled were not coming from " +
"this pool, or some item was recycled more than once. " + this);
}
}
}
@ -102,7 +109,9 @@ public class Pool<T> {
*/
@CallSuper
public void clear() {
mQueue.clear();
synchronized (lock) {
queue.clear();
}
}
/**
@ -114,7 +123,9 @@ public class Pool<T> {
*/
@SuppressWarnings("WeakerAccess")
public final int count() {
return activeCount() + recycledCount();
synchronized (lock) {
return activeCount() + recycledCount();
}
}
/**
@ -125,7 +136,9 @@ public class Pool<T> {
*/
@SuppressWarnings("WeakerAccess")
public final int activeCount() {
return activeCount;
synchronized (lock) {
return activeCount;
}
}
/**
@ -137,7 +150,9 @@ public class Pool<T> {
*/
@SuppressWarnings("WeakerAccess")
public final int recycledCount() {
return mQueue.size();
synchronized (lock) {
return queue.size();
}
}
@NonNull

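A usage sketch of the pool contract with a hypothetical byte-array factory (assuming Factory is the nested single-method interface referenced by the constructor; the real ByteBufferPool and InputBufferPool used by the encoders follow the same pattern):

// Hypothetical pool of 4KB byte arrays, capped at 16 live instances.
Pool<byte[]> pool = new Pool<>(16, new Pool.Factory<byte[]>() {
    @Override
    public byte[] create() {
        return new byte[4096];
    }
});

byte[] buffer = pool.get();   // returns null if 16 items are already active
if (buffer != null) {
    // ... fill and consume the buffer ...
    pool.recycle(buffer);     // hand it back so other threads can reuse it
}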
@ -63,7 +63,7 @@ public class GlCameraPreview extends CameraPreview<GLSurfaceView, SurfaceTexture
private int mOutputTextureId = 0;
private SurfaceTexture mInputSurfaceTexture;
private EglViewport mOutputViewport;
private Set<RendererFrameCallback> mRendererFrameCallbacks = Collections.synchronizedSet(new HashSet<RendererFrameCallback>());
private final Set<RendererFrameCallback> mRendererFrameCallbacks = Collections.synchronizedSet(new HashSet<RendererFrameCallback>());
@VisibleForTesting float mCropScaleX = 1F;
@VisibleForTesting float mCropScaleY = 1F;
private View mRootView;
@ -144,8 +144,11 @@ public class GlCameraPreview extends CameraPreview<GLSurfaceView, SurfaceTexture
getView().queueEvent(new Runnable() {
@Override
public void run() {
for (RendererFrameCallback callback : mRendererFrameCallbacks) {
callback.onRendererTextureCreated(mOutputTextureId);
// Need to synchronize when iterating the Collections.synchronizedSet
synchronized (mRendererFrameCallbacks) {
for (RendererFrameCallback callback : mRendererFrameCallbacks) {
callback.onRendererTextureCreated(mOutputTextureId);
}
}
}
});
@ -202,11 +205,12 @@ public class GlCameraPreview extends CameraPreview<GLSurfaceView, SurfaceTexture
Matrix.translateM(mTransformMatrix, 0, translX, translY, 0);
Matrix.scaleM(mTransformMatrix, 0, mCropScaleX, mCropScaleY, 1);
}
// Future note: passing scale to the viewport?
// They are scaleX and scaleY, but flipped based on mInputFlipped.
mOutputViewport.drawFrame(mOutputTextureId, mTransformMatrix);
for (RendererFrameCallback callback : mRendererFrameCallbacks) {
callback.onRendererFrame(mInputSurfaceTexture, mCropScaleX, mCropScaleY);
synchronized (mRendererFrameCallbacks) {
// Need to synchronize when iterating the Collections.synchronizedSet
for (RendererFrameCallback callback : mRendererFrameCallbacks) {
callback.onRendererFrame(mInputSurfaceTexture, mCropScaleX, mCropScaleY);
}
}
}
}
@ -299,6 +303,7 @@ public class GlCameraPreview extends CameraPreview<GLSurfaceView, SurfaceTexture
* Creates the renderer for this GL surface.
* @return the renderer for this GL surface
*/
@SuppressWarnings("WeakerAccess")
@NonNull
protected Renderer instantiateRenderer() {
return new Renderer();

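The reason for the explicit synchronized blocks above: Collections.synchronizedSet only guards individual calls, while iteration must be locked manually on the wrapper itself. A self-contained illustration in plain Java, not library code:

Set<Runnable> callbacks = Collections.synchronizedSet(new HashSet<Runnable>());
callbacks.add(new Runnable() {
    @Override
    public void run() { /* ... */ }
});

// add/remove/contains are already thread-safe, but iterating without locking
// can throw ConcurrentModificationException if another thread mutates the set.
synchronized (callbacks) {
    for (Runnable callback : callbacks) {
        callback.run();
    }
}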
@ -37,17 +37,16 @@ public abstract class FullVideoRecorder extends VideoRecorder {
super(listener);
}
@SuppressWarnings({"WeakerAccess", "UnusedReturnValue", "BooleanMethodIsAlwaysInverted"})
@SuppressWarnings({"WeakerAccess", "UnusedReturnValue"})
protected boolean prepareMediaRecorder(@NonNull VideoResult.Stub stub) {
if (mMediaRecorderPrepared) return true;
return onPrepareMediaRecorder(stub, new MediaRecorder());
}
@SuppressWarnings("WeakerAccess")
protected boolean onPrepareMediaRecorder(@NonNull VideoResult.Stub stub, @NonNull MediaRecorder mediaRecorder) {
mMediaRecorder = mediaRecorder;
Size size = stub.rotation % 180 != 0 ? stub.size.flip() : stub.size;
if (stub.audio == Audio.ON) {
if (stub.audio == Audio.ON || stub.audio == Audio.MONO || stub.audio == Audio.STEREO) {
// Must be called before setOutputFormat.
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.DEFAULT);
}
@ -71,8 +70,15 @@ public abstract class FullVideoRecorder extends VideoRecorder {
} else {
mMediaRecorder.setVideoEncodingBitRate(stub.videoBitRate);
}
if (stub.audio == Audio.ON) {
mMediaRecorder.setAudioChannels(mProfile.audioChannels);
if (stub.audio == Audio.ON || stub.audio == Audio.MONO || stub.audio == Audio.STEREO) {
if (stub.audio == Audio.ON) {
mMediaRecorder.setAudioChannels(mProfile.audioChannels);
} else if (stub.audio == Audio.MONO) {
mMediaRecorder.setAudioChannels(1);
} else //noinspection ConstantConditions
if (stub.audio == Audio.STEREO) {
mMediaRecorder.setAudioChannels(2);
}
mMediaRecorder.setAudioSamplingRate(mProfile.audioSampleRate);
mMediaRecorder.setAudioEncoder(mProfile.audioCodec);
if (stub.audioBitRate <= 0) {
@ -142,6 +148,7 @@ public abstract class FullVideoRecorder extends VideoRecorder {
@Override
protected void onStop() {
if (mMediaRecorder != null) {
dispatchVideoRecordingEnd();
try {
mMediaRecorder.stop();
} catch (Exception e) {

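The branching above only maps the Audio control to a channel count. A hypothetical helper with the same behavior, shown just to make the mapping explicit (the audioChannels name is illustrative and not part of the library; profile is the CamcorderProfile in use):

private static int audioChannels(@NonNull Audio audio, @NonNull CamcorderProfile profile) {
    switch (audio) {
        case MONO:   return 1;
        case STEREO: return 2;
        default:     return profile.audioChannels; // Audio.ON: let the device profile decide
    }
}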
@ -18,9 +18,11 @@ import com.otaliastudios.cameraview.preview.GlCameraPreview;
import com.otaliastudios.cameraview.preview.RendererFrameCallback;
import com.otaliastudios.cameraview.preview.RendererThread;
import com.otaliastudios.cameraview.size.Size;
import com.otaliastudios.cameraview.video.encoding.AudioConfig;
import com.otaliastudios.cameraview.video.encoding.AudioMediaEncoder;
import com.otaliastudios.cameraview.video.encoding.EncoderThread;
import com.otaliastudios.cameraview.video.encoding.MediaEncoderEngine;
import com.otaliastudios.cameraview.video.encoding.TextureConfig;
import com.otaliastudios.cameraview.video.encoding.TextureMediaEncoder;
import androidx.annotation.NonNull;
@ -38,9 +40,15 @@ public class SnapshotVideoRecorder extends VideoRecorder implements RendererFram
private static final CameraLogger LOG = CameraLogger.create(TAG);
private static final int DEFAULT_VIDEO_FRAMERATE = 30;
private static final int DEFAULT_VIDEO_BITRATE = 1000000;
private static final int DEFAULT_AUDIO_BITRATE = 64000;
// https://stackoverflow.com/a/5220554/4288782
// Assuming low motion, we don't want to put this too high for default usage;
// advanced users are still free to change it for each video.
private static int estimateVideoBitRate(@NonNull Size size, int frameRate) {
return (int) (0.07F * 1F * size.getWidth() * size.getHeight() * frameRate);
}
private static final int STATE_RECORDING = 0;
private static final int STATE_NOT_RECORDING = 1;
@ -73,7 +81,6 @@ public class SnapshotVideoRecorder extends VideoRecorder implements RendererFram
protected void onStart() {
mPreview.addRendererFrameCallback(this);
mDesiredState = STATE_RECORDING;
dispatchVideoRecordingStart();
}
@Override
@ -101,8 +108,8 @@ public class SnapshotVideoRecorder extends VideoRecorder implements RendererFram
LOG.i("Starting the encoder engine.");
// Set default options
if (mResult.videoBitRate <= 0) mResult.videoBitRate = DEFAULT_VIDEO_BITRATE;
if (mResult.videoFrameRate <= 0) mResult.videoFrameRate = DEFAULT_VIDEO_FRAMERATE;
if (mResult.videoBitRate <= 0) mResult.videoBitRate = estimateVideoBitRate(mResult.size, mResult.videoFrameRate);
if (mResult.audioBitRate <= 0) mResult.audioBitRate = DEFAULT_AUDIO_BITRATE;
// Video. Ensure width and height are divisible by 2, as I have read somewhere.
@ -118,22 +125,31 @@ public class SnapshotVideoRecorder extends VideoRecorder implements RendererFram
case DEVICE_DEFAULT: type = "video/avc"; break;
}
LOG.w("Creating frame encoder. Rotation:", mResult.rotation);
TextureMediaEncoder.Config config = new TextureMediaEncoder.Config(width, height,
mResult.videoBitRate,
mResult.videoFrameRate,
mResult.rotation,
type, mTextureId,
scaleX, scaleY,
EGL14.eglGetCurrentContext(),
mHasOverlay ? mOverlayTextureId : TextureMediaEncoder.NO_TEXTURE,
mOverlayRotation
);
TextureMediaEncoder videoEncoder = new TextureMediaEncoder(config);
TextureConfig videoConfig = new TextureConfig();
videoConfig.width = width;
videoConfig.height = height;
videoConfig.bitRate = mResult.videoBitRate;
videoConfig.frameRate = mResult.videoFrameRate;
videoConfig.rotation = mResult.rotation;
videoConfig.mimeType = type;
videoConfig.textureId = mTextureId;
videoConfig.scaleX = scaleX;
videoConfig.scaleY = scaleY;
videoConfig.eglContext = EGL14.eglGetCurrentContext();
if (mHasOverlay) {
videoConfig.overlayTextureId = mOverlayTextureId;
videoConfig.overlayRotation = mOverlayRotation;
}
TextureMediaEncoder videoEncoder = new TextureMediaEncoder(videoConfig);
// Audio
AudioMediaEncoder audioEncoder = null;
if (mResult.audio == Audio.ON) {
audioEncoder = new AudioMediaEncoder(new AudioMediaEncoder.Config(mResult.audioBitRate));
if (mResult.audio == Audio.ON || mResult.audio == Audio.MONO || mResult.audio == Audio.STEREO) {
AudioConfig audioConfig = new AudioConfig();
audioConfig.bitRate = mResult.audioBitRate;
if (mResult.audio == Audio.MONO) audioConfig.channels = 1;
if (mResult.audio == Audio.STEREO) audioConfig.channels = 2;
audioEncoder = new AudioMediaEncoder(audioConfig);
}
// Engine
@ -147,9 +163,10 @@ public class SnapshotVideoRecorder extends VideoRecorder implements RendererFram
if (mCurrentState == STATE_RECORDING) {
LOG.v("dispatching frame.");
TextureMediaEncoder textureEncoder = (TextureMediaEncoder) mEncoderEngine.getVideoEncoder();
TextureMediaEncoder.TextureFrame textureFrame = textureEncoder.acquireFrame();
textureFrame.timestamp = surfaceTexture.getTimestamp();
surfaceTexture.getTransformMatrix(textureFrame.transform);
TextureMediaEncoder.Frame frame = textureEncoder.acquireFrame();
frame.timestamp = surfaceTexture.getTimestamp();
frame.timestampMillis = System.currentTimeMillis(); // NOTE: this is an approximation but it seems to work.
surfaceTexture.getTransformMatrix(frame.transform);
// get overlay
if (mHasOverlay) {
@ -162,12 +179,12 @@ public class SnapshotVideoRecorder extends VideoRecorder implements RendererFram
LOG.w("Got Surface.OutOfResourcesException while drawing video overlays", e);
}
mOverlaySurfaceTexture.updateTexImage();
mOverlaySurfaceTexture.getTransformMatrix(textureFrame.overlayTransform);
mOverlaySurfaceTexture.getTransformMatrix(frame.overlayTransform);
}
if (mEncoderEngine != null) {
// can happen on teardown
mEncoderEngine.notify(TextureMediaEncoder.FRAME_EVENT, textureFrame);
mEncoderEngine.notify(TextureMediaEncoder.FRAME_EVENT, frame);
}
}
@ -192,7 +209,12 @@ public class SnapshotVideoRecorder extends VideoRecorder implements RendererFram
@Override
public void onEncodingStart() {
// Do nothing.
dispatchVideoRecordingStart();
}
@Override
public void onEncodingStop() {
dispatchVideoRecordingEnd();
}
@EncoderThread

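To make the estimate concrete: for a 1080x1920 snapshot video at 30 frames per second, the formula above gives roughly 4.35 Mbps (a back-of-the-envelope check, not extra API):

// 0.07 * motionFactor(1) * width * height * frameRate
int estimate = (int) (0.07F * 1F * 1080 * 1920 * 30); // 4354560 bps, about 4.35 Mbps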
@ -29,10 +29,16 @@ public abstract class VideoRecorder {
* The callback for the actual video recording starting.
*/
void onVideoRecordingStart();
/**
* Video recording has ended. We will finish processing the file
* and soon {@link #onVideoResult(VideoResult.Stub, Exception)} will be called.
*/
void onVideoRecordingEnd();
}
@VisibleForTesting(otherwise = VisibleForTesting.PROTECTED) VideoResult.Stub mResult;
@VisibleForTesting final VideoResultListener mListener;
private final VideoResultListener mListener;
@SuppressWarnings("WeakerAccess")
protected Exception mError;
private boolean mIsRecording;
@ -96,9 +102,20 @@ public abstract class VideoRecorder {
*/
@SuppressWarnings("WeakerAccess")
@CallSuper
protected void dispatchVideoRecordingStart(){
if(mListener != null){
protected void dispatchVideoRecordingStart() {
if (mListener != null) {
mListener.onVideoRecordingStart();
}
}
/**
* Subclasses can call this to notify that the video recording has ended,
* although the video result might still be processed.
*/
@CallSuper
protected void dispatchVideoRecordingEnd() {
if (mListener != null) {
mListener.onVideoRecordingEnd();
}
}
}

@ -0,0 +1,96 @@
package com.otaliastudios.cameraview.video.encoding;
import android.media.AudioFormat;
import androidx.annotation.NonNull;
/**
* Audio configuration to be passed as input to the constructor
* of an {@link AudioMediaEncoder}.
*/
@SuppressWarnings("WeakerAccess")
public class AudioConfig {
// Configurable options
public int bitRate; // ENCODED bit rate
public int channels = 1;
// Not configurable options (for now)
final String mimeType = "audio/mp4a-latm";
final int encoding = AudioFormat.ENCODING_PCM_16BIT; // Determines the sampleSizePerChannel
// The 44.1KHz frequency is the only setting guaranteed to be available on all devices.
final int samplingFrequency = 44100; // samples/sec
final int sampleSizePerChannel = 2; // byte/sample/channel [16bit]
final int byteRatePerChannel = samplingFrequency * sampleSizePerChannel; // byte/sec/channel
@NonNull
AudioConfig copy() {
AudioConfig config = new AudioConfig();
config.bitRate = this.bitRate;
config.channels = this.channels;
return config;
}
int byteRate() { // RAW byte rate
return byteRatePerChannel * channels; // byte/sec
}
@SuppressWarnings("unused")
int bitRate() { // RAW bit rate
return byteRate() * 8; // bit/sec
}
int audioFormatChannels() {
if (channels == 1) {
return AudioFormat.CHANNEL_IN_MONO;
} else if (channels == 2) {
return AudioFormat.CHANNEL_IN_STEREO;
}
throw new RuntimeException("Invalid number of channels: " + channels);
}
/**
* We call FRAME here the chunk of data that we want to read at each loop cycle.
*
* When this number is HIGH, the AudioRecord might be unable to keep a good pace and
 * we might end up skipping some frames.
*
* When this number is LOW, we pull a bigger number of frames and this might end up
 * delaying our recorder/encoder balance (more frames means more encoding operations).
 * In the end, this means that the recorder will skip some frames to restore the balance.
*
* @return the frame size
*/
int frameSize() {
return 1024 * channels;
}
/**
* Number of frames contained in the {@link android.media.AudioRecord} buffer.
* In theory, the higher this value is, the safer it is to delay reading as the
 * audioRecord will hold the recorded samples anyway and return them to us next time we read.
*
* Should be coordinated with {@link #frameSize()}.
*
* @return the number of frames
*/
int audioRecordBufferFrames() {
return 25;
}
/**
* We allocate buffers of {@link #frameSize()} each, which is not much.
*
* This value indicates the maximum number of these buffers that we can allocate at a given instant.
* This value is the number of runnables that the encoder thread is allowed to be 'behind'
 * the recorder thread. It's not safe to have it very large or we can end up encoding A LOT AFTER
 * the actual recording. It's better to keep this small and skip some recording instead.
*
* Should be coordinated with {@link #frameSize()}.
*
* @return the buffer pool max size
*/
int bufferPoolMaxSize() {
return 80;
}
}

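The derived quantities, spelled out for the stereo case under the 44.1kHz / 16-bit defaults above:

// channels = 2 (stereo), 44100 samples/sec, 2 bytes per sample per channel
int byteRatePerChannel = 44100 * 2;                  // 88200 bytes/sec/channel
int byteRate = byteRatePerChannel * 2;               // 176400 bytes/sec of raw audio
int frameSize = 1024 * 2;                            // 2048 bytes read per loop cycle
double frameMillis = 1000.0 * frameSize / byteRate;  // about 11.6 ms of audio per frame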
@ -1,133 +1,101 @@
package com.otaliastudios.cameraview.video.encoding;
import android.annotation.SuppressLint;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.AudioTimestamp;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaRecorder;
import android.os.Build;
import android.os.Handler;
import android.os.Message;
import com.otaliastudios.cameraview.CameraLogger;
import com.otaliastudios.cameraview.internal.utils.WorkerHandler;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.annotation.RequiresApi;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;
/**
* Default implementation for audio encoding.
*/
// TODO create onVideoRecordingStart/onVideoRecordingEnd callbacks
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR2)
public class AudioMediaEncoder extends MediaEncoder {
private static final String TAG = AudioMediaEncoder.class.getSimpleName();
private static final CameraLogger LOG = CameraLogger.create(TAG);
private static final String MIME_TYPE = "audio/mp4a-latm";
private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT; // Determines the SAMPLE_SIZE
private static final int CHANNELS = AudioFormat.CHANNEL_IN_MONO; // AudioFormat.CHANNEL_IN_STEREO;
// The 44.1KHz frequency is the only setting guaranteed to be available on all devices.
private static final int SAMPLING_FREQUENCY = 44100; // samples/sec
private static final int CHANNELS_COUNT = 1; // 2;
private static final int SAMPLE_SIZE = 2; // byte/sample/channel
private static final int BYTE_RATE_PER_CHANNEL = SAMPLING_FREQUENCY * SAMPLE_SIZE; // byte/sec/channel
private static final int BYTE_RATE = BYTE_RATE_PER_CHANNEL * CHANNELS_COUNT; // byte/sec
@SuppressWarnings("unused")
private static final int BIT_RATE = BYTE_RATE * 8; // bit/sec
// We call FRAME here the chunk of data that we want to read at each loop cycle
private static final int FRAME_SIZE_PER_CHANNEL = 1024; // bytes/frame/channel [AAC constant]
private static final int FRAME_SIZE = FRAME_SIZE_PER_CHANNEL * CHANNELS_COUNT; // bytes/frame
// We allocate buffers of 1KB each, which is not so much. This value indicates the maximum
// number of these buffers that we can allocate at a given instant.
// This value is the number of runnables that the encoder thread is allowed to be 'behind'
// the recorder thread. It's not safe to have it very large or we can end encoding A LOT AFTER
// the actual recording. It's better to reduce this and skip recording at all.
private static final int BUFFER_POOL_MAX_SIZE = 60;
private static long bytesToUs(int bytes) {
return (1000000L * bytes) / BYTE_RATE;
}
private static long bytesToUs(long bytes) {
return (1000000L * bytes) / BYTE_RATE;
}
private static final boolean PERFORMANCE_DEBUG = false;
private static final boolean PERFORMANCE_FILL_GAPS = true;
private boolean mRequestStop = false;
private AudioEncodingHandler mEncoder;
private AudioEncodingThread mEncoder;
private AudioRecordingThread mRecorder;
private ByteBufferPool mByteBufferPool;
private Config mConfig;
public static class Config {
int bitRate;
public Config(int bitRate) {
this.bitRate = bitRate;
}
}
public AudioMediaEncoder(@NonNull Config config) {
mConfig = config;
}
@NonNull
@Override
String getName() {
return "AudioEncoder";
private ByteBuffer mZeroBuffer;
private final AudioTimestamp mTimestamp;
private AudioConfig mConfig;
private InputBufferPool mInputBufferPool = new InputBufferPool();
private final LinkedBlockingQueue<InputBuffer> mInputBufferQueue = new LinkedBlockingQueue<>();
// Just to debug performance.
private int mSendCount = 0;
private int mExecuteCount = 0;
private long mAvgSendDelay = 0;
private long mAvgExecuteDelay = 0;
private Map<Long, Long> mSendStartMap = new HashMap<>();
public AudioMediaEncoder(@NonNull AudioConfig config) {
super("AudioEncoder");
mConfig = config.copy();
mTimestamp = new AudioTimestamp(mConfig.byteRate());
// These two were in onPrepare() but it's better to do warm-up here
// since thread and looper creation is expensive.
mEncoder = new AudioEncodingThread();
mRecorder = new AudioRecordingThread();
}
@EncoderThread
@Override
void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
final MediaFormat audioFormat = MediaFormat.createAudioFormat(MIME_TYPE, SAMPLING_FREQUENCY, CHANNELS_COUNT);
protected void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
final MediaFormat audioFormat = MediaFormat.createAudioFormat(
mConfig.mimeType,
mConfig.samplingFrequency,
mConfig.channels);
audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
audioFormat.setInteger(MediaFormat.KEY_CHANNEL_MASK, CHANNELS);
audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, mConfig.bitRate);
audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, CHANNELS_COUNT);
audioFormat.setInteger(MediaFormat.KEY_CHANNEL_MASK, mConfig.audioFormatChannels());
audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, mConfig.bitRate); // TODO multiply by channels?
try {
mMediaCodec = MediaCodec.createEncoderByType(MIME_TYPE);
mMediaCodec = MediaCodec.createEncoderByType(mConfig.mimeType);
} catch (IOException e) {
throw new RuntimeException(e);
}
mMediaCodec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mMediaCodec.start();
mByteBufferPool = new ByteBufferPool(FRAME_SIZE, BUFFER_POOL_MAX_SIZE);
mEncoder = new AudioEncodingHandler();
mRecorder = new AudioRecordingThread();
mByteBufferPool = new ByteBufferPool(mConfig.frameSize(), mConfig.bufferPoolMaxSize());
mZeroBuffer = ByteBuffer.allocateDirect(mConfig.frameSize());
}
@EncoderThread
@Override
void onStart() {
protected void onStart() {
mRequestStop = false;
mRecorder.start();
mEncoder.start();
}
@EncoderThread
@Override
void onEvent(@NonNull String event, @Nullable Object data) { }
@EncoderThread
@Override
void onStop() {
protected void onStop() {
mRequestStop = true;
}
@Override
void onRelease() {
protected void onStopped() {
super.onStopped();
mRequestStop = false;
mEncoder = null;
mRecorder = null;
@ -138,25 +106,52 @@ public class AudioMediaEncoder extends MediaEncoder {
}
@Override
int getEncodedBitRate() {
protected int getEncodedBitRate() {
return mConfig.bitRate;
}
class AudioRecordingThread extends Thread {
/**
     * Sleeps for the duration of the given number of frames, in order to skip them. This can be used to slow down
* the recording operation to balance it with encoding.
*/
private void skipFrames(int frames) {
try {
Thread.sleep(AudioTimestamp.bytesToMillis(
mConfig.frameSize() * frames,
mConfig.byteRate()));
} catch (InterruptedException ignore) {}
}
/**
     * A thread recording from the microphone using the {@link AudioRecord} class.
* Communicates with {@link AudioEncodingThread} using {@link #mInputBufferQueue}.
*/
private class AudioRecordingThread extends Thread {
private AudioRecord mAudioRecord;
private ByteBuffer mCurrentBuffer;
private int mReadBytes;
private long mLastTimeUs;
AudioRecordingThread() {
final int minBufferSize = AudioRecord.getMinBufferSize(SAMPLING_FREQUENCY, CHANNELS, ENCODING);
int bufferSize = FRAME_SIZE * 25; // Make this bigger so we don't skip frames.
private long mFirstTimeUs = Long.MIN_VALUE;
private AudioRecordingThread() {
final int minBufferSize = AudioRecord.getMinBufferSize(
mConfig.samplingFrequency,
mConfig.audioFormatChannels(),
mConfig.encoding);
// Make this bigger so we don't skip frames. With 25 frames: stereo = 51200 bytes, mono = 25600 bytes.
// 25 is quite big already. Making it bigger to solve the read() delay
// just makes things worse (it ruins MONO as well), and making it smaller
// changes behavior as well.
int bufferSize = mConfig.frameSize() * mConfig.audioRecordBufferFrames();
while (bufferSize < minBufferSize) {
bufferSize += FRAME_SIZE; // Unlikely I think.
bufferSize += mConfig.frameSize(); // Unlikely.
}
mAudioRecord = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
SAMPLING_FREQUENCY, CHANNELS, ENCODING, bufferSize);
mConfig.samplingFrequency,
mConfig.audioFormatChannels(),
mConfig.encoding,
bufferSize);
setPriority(Thread.MAX_PRIORITY);
}
@ -179,17 +174,41 @@ public class AudioMediaEncoder extends MediaEncoder {
private void read(boolean endOfStream) {
mCurrentBuffer = mByteBufferPool.get();
if (mCurrentBuffer == null) {
LOG.e("read thread - eos:", endOfStream, "- Skipping audio frame, encoding is too slow.");
// Should fix the next presentation time here, but
// This can happen and it means that encoding is slow with respect to recording.
// One might be tempted to fix precisely the next frame presentation time when this happens,
// but this is not needed because the current increaseTime() algorithm will consider delays
// when they get large.
// Sleeping before returning is a good way of balancing the two operations.
// However, if endOfStream, we CAN'T lose this frame!
if (endOfStream) {
LOG.v("read thread - eos: true - No buffer, retrying.");
read(true); // try again
} else {
LOG.w("read thread - eos: false - Skipping audio frame, encoding is too slow.");
skipFrames(6); // sleep a bit
}
} else {
mCurrentBuffer.clear();
mReadBytes = mAudioRecord.read(mCurrentBuffer, FRAME_SIZE);
// When stereo, we read twice as much data here and AudioRecord will fill the buffer
// with left and right bytes. https://stackoverflow.com/q/20594750/4288782
if (PERFORMANCE_DEBUG) {
long before = System.nanoTime();
mReadBytes = mAudioRecord.read(mCurrentBuffer, mConfig.frameSize());
long after = System.nanoTime();
float delayMillis = (after - before) / 1000000F;
float durationMillis = AudioTimestamp.bytesToMillis(mReadBytes, mConfig.byteRate());
LOG.v("read thread - reading took:", delayMillis,
"should be:", durationMillis,
"delay:", delayMillis - durationMillis);
} else {
mReadBytes = mAudioRecord.read(mCurrentBuffer, mConfig.frameSize());
}
LOG.i("read thread - eos:", endOfStream, "- Read new audio frame. Bytes:", mReadBytes);
if (mReadBytes > 0) { // Good read: increase PTS.
mLastTimeUs = increaseTime(mReadBytes);
LOG.i("read thread - eos:", endOfStream, "- Frame PTS:", mLastTimeUs);
increaseTime(mReadBytes, endOfStream);
LOG.i("read thread - eos:", endOfStream, "- mLastTimeUs:", mLastTimeUs);
mCurrentBuffer.limit(mReadBytes);
onBuffer(endOfStream);
enqueue(mCurrentBuffer, mLastTimeUs, endOfStream);
} else if (mReadBytes == AudioRecord.ERROR_INVALID_OPERATION) {
LOG.e("read thread - eos:", endOfStream, "- Got AudioRecord.ERROR_INVALID_OPERATION");
} else if (mReadBytes == AudioRecord.ERROR_BAD_VALUE) {
@ -199,174 +218,153 @@ public class AudioMediaEncoder extends MediaEncoder {
}
/**
* New data at position buffer.position() of size buffer.remaining()
* has been written into this buffer. This method should pass the data
* to the consumer.
         * Increases the presentation time and checks for the max length constraint. This is much faster
         * than waiting for the encoder to check it during {@link #drainOutput(boolean)}. We
         * want to catch this as soon as possible so we can stop recording useless frames and stop
         * bothering all the threads involved.
* @param readBytes bytes read in last reading
* @param endOfStream end of stream?
*/
private void onBuffer(boolean endOfStream) {
LOG.v("read thread - Sending buffer to encoder thread.");
mEncoder.sendInputBuffer(mCurrentBuffer, mLastTimeUs, endOfStream);
}
private long increaseTime(int readBytes) {
return increaseTime3(readBytes);
}
private void increaseTime(int readBytes, boolean endOfStream) {
// Get the latest frame timestamp.
mLastTimeUs = mTimestamp.increaseUs(readBytes);
if (mFirstTimeUs == Long.MIN_VALUE) {
mFirstTimeUs = mLastTimeUs;
// Compute the first frame milliseconds as well.
notifyFirstFrameMillis(System.currentTimeMillis()
- AudioTimestamp.bytesToMillis(readBytes, mConfig.byteRate()));
}
/**
* This method simply assumes that we read everything without losing a single US.
* It will use System.nanoTime() just once, as the starting point.
* Of course we don't as there are things going on in this thread.
*/
@SuppressWarnings("unused")
private long increaseTime1(int readBytes) {
return mLastTimeUs + bytesToUs(readBytes);
}
// See if we reached the max length value.
boolean didReachMaxLength = (mLastTimeUs - mFirstTimeUs) > getMaxLengthMillis() * 1000L;
if (didReachMaxLength && !endOfStream) {
LOG.w("read thread - this frame reached the maxLength! deltaUs:", mLastTimeUs - mFirstTimeUs);
notifyMaxLengthReached();
}
/**
* Just for testing, this method will use Api 24 method to retrieve the timestamp.
* This way we let the platform choose instead of making assumptions.
*/
@SuppressWarnings("unused")
@RequiresApi(24)
private long increaseTime2(int readBytes) {
if (mApi24Timestamp == null) {
mApi24Timestamp = new AudioTimestamp();
// Add zeroes if we have huge gaps. Even if timestamps are correct, if we have gaps between
// them, the encoder might shrink all timestamps to produce continuous audio. This results
// in a video that is fast-forwarded.
// Adding zeroes does not solve the gaps issue - audio will still be distorted. But at
// least we get a video that has the correct playback speed.
if (PERFORMANCE_FILL_GAPS) {
int gaps = mTimestamp.getGapCount(mConfig.frameSize());
if (gaps > 0) {
long gapStart = mTimestamp.getGapStartUs(mLastTimeUs);
long frameUs = AudioTimestamp.bytesToUs(mConfig.frameSize(), mConfig.byteRate());
LOG.w("read thread - GAPS: trying to add", gaps, "zeroed buffers");
for (int i = 0; i < gaps; i++) {
ByteBuffer zeroBuffer = mByteBufferPool.get();
if (zeroBuffer == null) {
LOG.e("read thread - GAPS: aborting because we have no free buffer.");
break;
}
zeroBuffer.position(0);
zeroBuffer.put(mZeroBuffer);
zeroBuffer.clear();
enqueue(zeroBuffer, gapStart, false);
gapStart += frameUs;
}
}
}
mAudioRecord.getTimestamp(mApi24Timestamp, AudioTimestamp.TIMEBASE_MONOTONIC);
return mApi24Timestamp.nanoTime / 1000;
}
private AudioTimestamp mApi24Timestamp;
/**
* This method looks like an improvement over {@link #increaseTime1(int)} as it
* accounts for the current time as well. Adapted & improved. from Kickflip.
*
* This creates regular timestamps unless we accumulate a lot of delay (greater than
* twice the buffer duration), in which case it creates a gap and starts again trying
* to be regular from the new point.
*/
private long increaseTime3(int readBytes) {
long bufferDurationUs = bytesToUs(readBytes);
long bufferEndTimeUs = System.nanoTime() / 1000; // now
long bufferStartTimeUs = bufferEndTimeUs - bufferDurationUs;
// If this is the first time, the base time is the buffer start time.
if (mBytesSinceBaseTime == 0) mBaseTimeUs = bufferStartTimeUs;
// Recompute time assuming that we are respecting the sampling frequency.
// This puts the time at the end of last read buffer, which means, where we
// should be if we had no delay / missed buffers.
long correctedTimeUs = mBaseTimeUs + bytesToUs(mBytesSinceBaseTime);
long correctionUs = bufferStartTimeUs - correctedTimeUs;
// However, if the correction is too big (> 2*bufferDurationUs), reset to this point.
// This is triggered if we lose buffers and are recording/encoding at a slower rate.
if (correctionUs >= 2L * bufferDurationUs) {
mBaseTimeUs = bufferStartTimeUs;
mBytesSinceBaseTime = readBytes;
return mBaseTimeUs;
} else {
mBytesSinceBaseTime += readBytes;
return correctedTimeUs;
private void enqueue(@NonNull ByteBuffer byteBuffer, long timestamp, boolean isEndOfStream) {
if (PERFORMANCE_DEBUG) {
mSendStartMap.put(timestamp, System.nanoTime() / 1000000);
}
int readBytes = byteBuffer.remaining();
InputBuffer inputBuffer = mInputBufferPool.get();
//noinspection ConstantConditions
inputBuffer.source = byteBuffer;
inputBuffer.timestamp = timestamp;
inputBuffer.length = readBytes;
inputBuffer.isEndOfStream = isEndOfStream;
mInputBufferQueue.add(inputBuffer);
}
private long mBaseTimeUs;
private long mBytesSinceBaseTime;
}
/**
* This will be a super busy thread. It's important for it to be:
* - different than the recording thread: or we would miss a lot of audio
* - different than the 'encoder' thread: we want that to be reactive.
* For example, a stop() must become onStop() soon, can't wait for all this draining.
* A thread encoding the microphone data using the media encoder APIs.
* Communicates with {@link AudioRecordingThread} using {@link #mInputBufferQueue}.
*
* We want to do this operation on a different thread than the recording one (to avoid
     * losing frames while we're working here), and different from the {@link MediaEncoder}'s
     * own thread (we want that to be reactive - stop() must become onStop() soon).
*/
@SuppressLint("HandlerLeak")
class AudioEncodingHandler extends Handler {
InputBufferPool mInputBufferPool = new InputBufferPool();
LinkedBlockingQueue<InputBuffer> mPendingOps = new LinkedBlockingQueue<>();
AudioEncodingHandler() {
super(WorkerHandler.get("AudioEncodingHandler").getLooper());
}
void sendInputBuffer(ByteBuffer buffer, long presentationTimeUs, boolean endOfStream) {
int presentation1 = (int) (presentationTimeUs >> 32);
int presentation2 = (int) (presentationTimeUs);
sendMessage(obtainMessage(endOfStream ? 1 : 0, presentation1, presentation2, buffer));
private class AudioEncodingThread extends Thread {
private AudioEncodingThread() {
setPriority(Thread.MAX_PRIORITY);
}
@Override
public void handleMessage(Message msg) {
super.handleMessage(msg);
boolean endOfStream = msg.what == 1;
long timestamp = (((long) msg.arg1) << 32) | (((long) msg.arg2) & 0xffffffffL);
LOG.i("encoding thread - got buffer. timestamp:", timestamp, "eos:", endOfStream);
ByteBuffer buffer = (ByteBuffer) msg.obj;
int readBytes = buffer.remaining();
InputBuffer inputBuffer = mInputBufferPool.get();
//noinspection ConstantConditions
inputBuffer.source = buffer;
inputBuffer.timestamp = timestamp;
inputBuffer.length = readBytes;
inputBuffer.isEndOfStream = endOfStream;
mPendingOps.add(inputBuffer);
performPendingOps(endOfStream);
}
private void performPendingOps(boolean force) {
LOG.i("encoding thread - performing", mPendingOps.size(), "pending operations. force:", force);
InputBuffer buffer;
while ((buffer = mPendingOps.peek()) != null) {
if (force) {
acquireInputBuffer(buffer);
performPendingOp(buffer);
} else if (tryAcquireInputBuffer(buffer)) {
performPendingOp(buffer);
public void run() {
encoding: while (true) {
if (mInputBufferQueue.isEmpty()) {
skipFrames(2);
} else {
break; // Will try later.
LOG.i("encoding thread - performing", mInputBufferQueue.size(), "pending operations.");
InputBuffer inputBuffer;
while ((inputBuffer = mInputBufferQueue.peek()) != null) {
// Performance logging
if (PERFORMANCE_DEBUG) {
long sendEnd = System.nanoTime() / 1000000;
Long sendStart = mSendStartMap.remove(inputBuffer.timestamp);
if (sendStart != null) {
mAvgSendDelay = ((mAvgSendDelay * mSendCount) + (sendEnd - sendStart)) / (++mSendCount);
LOG.v("send delay millis:", sendEnd - sendStart, "average:", mAvgSendDelay);
} else {
// This input buffer was already processed (but tryAcquire failed for now).
}
}
// Actual work
if (inputBuffer.isEndOfStream) {
acquireInputBuffer(inputBuffer);
encode(inputBuffer);
break encoding;
} else if (tryAcquireInputBuffer(inputBuffer)) {
encode(inputBuffer);
} else {
skipFrames(1);
}
}
}
}
// We got an end of stream.
mInputBufferPool.clear();
if (PERFORMANCE_DEBUG) {
// After latest changes, the count here is not so different between MONO and STEREO.
// We get about 400 frames in both cases (430 for MONO, but doesn't seem like a big issue).
LOG.e("EXECUTE DELAY MILLIS:", mAvgExecuteDelay, "COUNT:", mExecuteCount);
LOG.e("SEND DELAY MILLIS:", mAvgSendDelay, "COUNT:", mSendCount);
}
}
private void performPendingOp(InputBuffer buffer) {
private void encode(@NonNull InputBuffer buffer) {
long executeStart = System.nanoTime() / 1000000;
LOG.i("encoding thread - performing pending operation for timestamp:", buffer.timestamp, "- encoding.");
buffer.data.put(buffer.source); // TODO this copy is prob. the worst part here for performance
buffer.data.put(buffer.source); // NOTE: this copy is prob. the worst part here for performance
mByteBufferPool.recycle(buffer.source);
mPendingOps.remove(buffer);
mInputBufferQueue.remove(buffer);
encodeInputBuffer(buffer);
boolean eos = buffer.isEndOfStream;
mInputBufferPool.recycle(buffer);
if (eos) mInputBufferPool.clear();
LOG.i("encoding thread - performing pending operation for timestamp:", buffer.timestamp, "- draining.");
// NOTE: can consider calling this drainOutput on yet another thread, which would let us
// use an even smaller BUFFER_POOL_MAX_SIZE without losing audio frames. But this way
// we can accumulate delay on this new thread without noticing (no pool getting empty).
if (true) {
drainOutput(eos);
if (eos) WorkerHandler.get("AudioEncodingHandler").getThread().interrupt();
} else {
// Testing the option above.
WorkerHandler.get("AudioEncodingDrainer").remove(drainRunnable);
WorkerHandler.get("AudioEncodingDrainer").remove(drainRunnableEos);
WorkerHandler.get("AudioEncodingDrainer").post(eos ? drainRunnableEos : drainRunnable);
}
}
drainOutput(buffer.isEndOfStream);
private final Runnable drainRunnable = new Runnable() {
@Override
public void run() {
drainOutput(false);
if (PERFORMANCE_DEBUG) {
long executeEnd = System.nanoTime() / 1000000;
mAvgExecuteDelay = ((mAvgExecuteDelay * mExecuteCount) + (executeEnd - executeStart)) / (++mExecuteCount);
LOG.v("execute delay millis:", executeEnd - executeStart, "average:", mAvgExecuteDelay);
}
};
private final Runnable drainRunnableEos = new Runnable() {
@Override
public void run() {
drainOutput(true);
WorkerHandler.get("AudioEncodingHandler").getThread().interrupt();
WorkerHandler.get("AudioEncodingDrainer").getThread().interrupt();
}
};
}
}
}

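Stripped of the audio specifics, the two threads above form a plain producer/consumer pair over a LinkedBlockingQueue. A generic sketch of that pattern, not library code (the real classes also pool their buffers and handle end-of-stream):

final LinkedBlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();

Thread recorder = new Thread() {            // producer: reads the microphone
    @Override
    public void run() {
        while (!isInterrupted()) {
            byte[] chunk = new byte[2048];   // the real code takes this from a ByteBufferPool
            // ... fill chunk from AudioRecord.read(...) ...
            queue.add(chunk);
        }
    }
};

Thread encoder = new Thread() {             // consumer: feeds the MediaCodec
    @Override
    public void run() {
        while (!isInterrupted()) {
            byte[] chunk = queue.poll();
            if (chunk == null) continue;     // the real code sleeps for a frame here
            // ... queue chunk into the codec and drain its output ...
        }
    }
};

recorder.start();
encoder.start();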
@ -0,0 +1,105 @@
package com.otaliastudios.cameraview.video.encoding;
import android.util.Log;
/**
* Computes timestamps for audio frames.
* Video frames do not need this since the timestamp comes from
* the surface texture.
*
 * This is independent of the channel count, as long as the read bytes include
 * all channels and the byte rate accounts for this as well.
 * If channels is 2, both values will be doubled and we behave the same.
*
* This class keeps track of gaps between frames.
* This can be used, for example, to write zeros instead of nothing.
*/
class AudioTimestamp {
static long bytesToUs(long bytes, int byteRate) {
return (1000000L * bytes) / byteRate;
}
static long bytesToMillis(long bytes, int byteRate) {
return (1000L * bytes) / byteRate;
}
private int mByteRate;
private long mBaseTimeUs;
private long mBytesSinceBaseTime;
private long mGapUs;
AudioTimestamp(int byteRate) {
mByteRate = byteRate;
}
/**
* This method accounts for the current time and proved to be the most reliable among
     * the ones tested.
*
* This creates regular timestamps unless we accumulate a lot of delay (greater than
     * twice the buffer duration), in which case it creates a gap and starts again trying
     * to be regular from the new point.
*
* Returns timestamps in the {@link System#nanoTime()} reference.
*/
@SuppressWarnings("SameParameterValue")
long increaseUs(int readBytes) {
long bufferDurationUs = bytesToUs((long) readBytes, mByteRate);
long bufferEndTimeUs = System.nanoTime() / 1000; // now
long bufferStartTimeUs = bufferEndTimeUs - bufferDurationUs;
// If this is the first time, the base time is the buffer start time.
if (mBytesSinceBaseTime == 0) mBaseTimeUs = bufferStartTimeUs;
// Recompute time assuming that we are respecting the sampling frequency.
// This puts the time at the end of last read buffer, which means, where we
// should be if we had no delay / missed buffers.
long correctedTimeUs = mBaseTimeUs + bytesToUs(mBytesSinceBaseTime, mByteRate);
long correctionUs = bufferStartTimeUs - correctedTimeUs;
if (correctionUs >= 2L * bufferDurationUs) {
// However, if the correction is too big (> 2*bufferDurationUs), reset to this point.
// This is triggered if we lose buffers and are recording/encoding at a slower rate.
mBaseTimeUs = bufferStartTimeUs;
mBytesSinceBaseTime = readBytes;
mGapUs = correctionUs;
return mBaseTimeUs;
} else {
//noinspection StatementWithEmptyBody
if (correctionUs < 0) {
// This means that this method is being called too often, so that the expected start
// time for this buffer is BEFORE the last buffer end. So, respect the last buffer end
// instead.
}
mGapUs = 0;
mBytesSinceBaseTime += readBytes;
return correctedTimeUs;
}
}
/**
* Returns the number of gaps (meaning, missing frames) assuming that each
     * frame has frameBytes size. Possibly 0.
*
* @param frameBytes size of standard frame
* @return number of gaps
*/
int getGapCount(int frameBytes) {
if (mGapUs == 0) return 0;
long durationUs = bytesToUs((long) frameBytes, mByteRate);
return (int) (mGapUs / durationUs);
}
/**
* Returns the timestamp of the first missing frame.
* Should be called only after {@link #getGapCount(int)} returns something
     * greater than zero.
*
* @param lastTimeUs the last real frame timestamp
* @return the first missing frame timestamp
*/
long getGapStartUs(long lastTimeUs) {
return lastTimeUs - mGapUs;
}
}

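A worked example of the gap bookkeeping, reusing the stereo numbers from AudioConfig (2048-byte frames at 176400 bytes/sec): if increaseUs() measures a correction of about 35 ms, the recording thread can recover three zero-filled frames:

// bytesToUs is the package-private helper defined above.
long frameUs = AudioTimestamp.bytesToUs(2048L, 176400);  // 11609 us per frame
long gapUs = 35000L;                                      // correction measured by increaseUs()
int missingFrames = (int) (gapUs / frameUs);              // 3 frames to fill with zeros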
@ -6,11 +6,12 @@ import java.nio.ByteBuffer;
* Represents an input buffer, which means,
* raw data that should be encoded by MediaCodec.
*/
class InputBuffer {
ByteBuffer data;
ByteBuffer source;
int index;
int length;
long timestamp;
boolean isEndOfStream;
@SuppressWarnings("WeakerAccess")
public class InputBuffer {
public ByteBuffer data;
public ByteBuffer source;
public int index;
public int length;
public long timestamp;
public boolean isEndOfStream;
}

@ -5,6 +5,7 @@ import android.media.MediaCodec;
import android.media.MediaFormat;
import android.os.Build;
import androidx.annotation.CallSuper;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.annotation.RequiresApi;
@ -16,10 +17,65 @@ import java.nio.ByteBuffer;
/**
* Base class for single-track encoders, coordinated by a {@link MediaEncoderEngine}.
* For the lifecycle of this class, read comments in the engine class.
*
* This class manages a background thread and streamlines events on this thread
 * which we call the {@link EncoderThread}:
*
* 1. When {@link #prepare(MediaEncoderEngine.Controller, long)} is called, we call
* {@link #onPrepare(MediaEncoderEngine.Controller, long)} on the encoder thread.
*
* 2. When {@link #start()} is called, we call {@link #onStart()} on the encoder thread.
*
* 3. When {@link #notify(String, Object)} is called, we call {@link #onEvent(String, Object)}
* on the encoder thread.
*
* 4. After starting, encoders are free to acquire an input buffer with
* {@link #tryAcquireInputBuffer(InputBuffer)} or {@link #acquireInputBuffer(InputBuffer)}.
*
* 5. After getting the input buffer, they are free to fill it with data.
*
* 6. After filling it with data, they are required to call {@link #encodeInputBuffer(InputBuffer)}
* for encoding to take place.
*
* 7. After this happens, or at regular intervals, or whenever they want, encoders can then
* call {@link #drainOutput(boolean)} with a false parameter to fetch the encoded data
* and pass it to the engine (so it can be written to the muxer).
*
* 8. When {@link #stop()} is called - either by the engine user, or as a consequence of having
* called {@link MediaEncoderEngine.Controller#requestStop(int)} - we call
* {@link #onStop()} on the encoder thread.
*
* 9. The {@link #onStop()} implementation should, as fast as possible, stop reading, signal the
* end of input stream (there are two ways to do so), and finally call
* {@link #drainOutput(boolean)} for the last time, with a true parameter.
*
 * 10. Once everything is drained, we will call {@link #onStopped()}, on an unspecified thread.
* There, subclasses can perform extra cleanup of their own resources.
*
* For VIDEO encoders, things are much easier because we skip the whole input part.
* See description in {@link VideoMediaEncoder}.
*
* MAX LENGTH CONSTRAINT
*
 * The max length constraint is checked automatically during {@link #drainOutput(boolean)},
 * OR subclasses can provide a hint to this encoder using {@link #notifyMaxLengthReached()}.
* In this second case, we can request a stop at reading time, so we avoid useless readings
* in certain setups (where drain is called a lot after reading).
*
* TIMING
*
* Subclasses can use timestamps (in microseconds) in any reference system they prefer. For
 * instance, it might be the {@link System#nanoTime()} reference, or some reference provided
 * by SurfaceTextures.
*
 * However, they are required to call {@link #notifyFirstFrameMillis(long)} and pass the
 * milliseconds of the first frame in the {@link System#currentTimeMillis()} reference,
 * which is something that we can coordinate on.
 */
*/
// https://github.com/saki4510t/AudioVideoRecordingSample/blob/master/app/src/main/java/com/serenegiant/encoder/MediaEncoder.java
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR2)
abstract class MediaEncoder {
public abstract class MediaEncoder {
private final static String TAG = MediaEncoder.class.getSimpleName();
private final static CameraLogger LOG = CameraLogger.create(TAG);
@ -36,6 +92,20 @@ abstract class MediaEncoder {
// Can't go too high or this is a bottleneck for the audio encoder.
private final static int OUTPUT_TIMEOUT_US = 0;
private final static int STATE_NONE = 0;
private final static int STATE_PREPARING = 1;
private final static int STATE_PREPARED = 2;
private final static int STATE_STARTING = 3;
private final static int STATE_STARTED = 4;
// Max timestamp was reached. We will keep draining, but have asked the engine to stop us.
// This step can be skipped in case stop() is called from outside before a limit is reached.
private final static int STATE_LIMIT_REACHED = 5;
private final static int STATE_STOPPING = 6;
private final static int STATE_STOPPED = 7;
private int mState = STATE_NONE;
private final String mName;
@SuppressWarnings("WeakerAccess")
protected MediaCodec mMediaCodec;
@ -47,35 +117,65 @@ abstract class MediaEncoder {
private OutputBufferPool mOutputBufferPool;
private MediaCodec.BufferInfo mBufferInfo;
private MediaCodecBuffers mBuffers;
private long mMaxLengthMillis;
private boolean mMaxLengthReached;
private long mStartTimeMillis = 0; // In System.currentTimeMillis()
private long mStartTimeUs = Long.MIN_VALUE; // In unknown reference
private long mLastTimeUs = 0;
/**
* A readable name for the thread.
* Needs a readable name for the thread and for logging.
* @param name a name
*/
@NonNull
abstract String getName();
@SuppressWarnings("WeakerAccess")
protected MediaEncoder(@NonNull String name) {
mName = name;
}
private void setState(int newState) {
String newStateName = null;
switch (newState) {
case STATE_NONE: newStateName = "NONE"; break;
case STATE_PREPARING: newStateName = "PREPARING"; break;
case STATE_PREPARED: newStateName = "PREPARED"; break;
case STATE_STARTING: newStateName = "STARTING"; break;
case STATE_STARTED: newStateName = "STARTED"; break;
case STATE_LIMIT_REACHED: newStateName = "LIMIT_REACHED"; break;
case STATE_STOPPING: newStateName = "STOPPING"; break;
case STATE_STOPPED: newStateName = "STOPPED"; break;
}
LOG.w(mName, "setState:", newStateName);
mState = newState;
}
/**
* This encoder was attached to the engine. Keep the controller
* and run the internal thread.
*
* NOTE: it's important to call {@link WorkerHandler#post(Runnable)} instead of run()!
* The internal actions can cause a stop/release, and due to how {@link WorkerHandler#run(Runnable)}
* works, we might have {@link #onStop()} or {@link #onRelease()} to be executed before
     * The internal actions can cause a stop, and due to how {@link WorkerHandler#run(Runnable)}
     * works, {@link #onStop()} or {@link #onStopped()} might be executed before
* the previous step has completed.
*/
final void prepare(@NonNull final MediaEncoderEngine.Controller controller, final long maxLengthMillis) {
if (mState >= STATE_PREPARING) {
LOG.e(mName, "Wrong state while preparing. Aborting.", mState);
return;
}
mController = controller;
mBufferInfo = new MediaCodec.BufferInfo();
mMaxLengthMillis = maxLengthMillis;
mWorker = WorkerHandler.get(getName());
LOG.i(getName(), "Prepare was called. Posting.");
mWorker = WorkerHandler.get(mName);
LOG.i(mName, "Prepare was called. Posting.");
mWorker.post(new Runnable() {
@Override
public void run() {
LOG.i(getName(), "Prepare was called. Executing.");
LOG.i(mName, "Prepare was called. Executing.");
setState(STATE_PREPARING);
onPrepare(controller, maxLengthMillis);
setState(STATE_PREPARED);
}
});
}
@ -85,14 +185,22 @@ abstract class MediaEncoder {
* in case the encoder needs to wait for a certain event
* like a "frame available".
*
* The {@link #STATE_STARTED} state will be set when draining for the
* first time (not when onStart ends).
*
* NOTE: it's important to call {@link WorkerHandler#post(Runnable)} instead of run()!
*/
final void start() {
LOG.w(getName(), "Start was called. Posting.");
LOG.w(mName, "Start was called. Posting.");
mWorker.post(new Runnable() {
@Override
public void run() {
LOG.w(getName(), "Start was called. Executing.");
if (mState < STATE_PREPARED || mState >= STATE_STARTING) {
LOG.e(mName, "Wrong state while starting. Aborting.", mState);
return;
}
setState(STATE_STARTING);
LOG.w(mName, "Start was called. Executing.");
onStart();
}
});
@ -108,27 +216,36 @@ abstract class MediaEncoder {
* @param data object
*/
final void notify(final @NonNull String event, final @Nullable Object data) {
LOG.v(getName(), "Notify was called. Posting.");
LOG.v(mName, "Notify was called. Posting.");
mWorker.post(new Runnable() {
@Override
public void run() {
LOG.v(getName(), "Notify was called. Executing.");
LOG.v(mName, "Notify was called. Executing.");
onEvent(event, data);
}
});
}
/**
* Stop recording.
* Stop recording. This involves signaling the end of stream and draining
* all output left.
*
* The {@link #STATE_STOPPED} state will be set when draining for the
     * last time (not when onStop ends).
*
* NOTE: it's important to call {@link WorkerHandler#post(Runnable)} instead of run()!
*/
final void stop() {
LOG.w(getName(), "Stop was called. Posting.");
if (mState >= STATE_STOPPING) {
LOG.e(mName, "Wrong state while stopping. Aborting.", mState);
return;
}
setState(STATE_STOPPING);
LOG.w(mName, "Stop was called. Posting.");
mWorker.post(new Runnable() {
@Override
public void run() {
LOG.w(getName(), "Stop was called. Executing.");
LOG.w(mName, "Stop was called. Executing.");
onStop();
}
});
@ -145,7 +262,7 @@ abstract class MediaEncoder {
* @param maxLengthMillis the maxLength in millis
*/
@EncoderThread
abstract void onPrepare(@NonNull final MediaEncoderEngine.Controller controller, final long maxLengthMillis);
protected abstract void onPrepare(@NonNull final MediaEncoderEngine.Controller controller, final long maxLengthMillis);
/**
* Start recording. This might be a lightweight operation
@ -153,7 +270,7 @@ abstract class MediaEncoder {
* like a "frame available".
*/
@EncoderThread
abstract void onStart();
protected abstract void onStart();
/**
* The caller notifying of a certain event occurring.
@ -162,38 +279,36 @@ abstract class MediaEncoder {
* @param data object
*/
@EncoderThread
abstract void onEvent(@NonNull String event, @Nullable Object data);
protected void onEvent(@NonNull String event, @Nullable Object data) {}
/**
* Stop recording.
* Stop recording. This involves signaling the end of stream and draining
* all output left.
*/
@EncoderThread
abstract void onStop();
protected abstract void onStop();
/**
* Called by {@link #drainOutput(boolean)} when we get an EOS signal (not necessarily in the
* parameters, might also be through an input buffer flag).
*
* This is a good moment to release all resources, although the muxer might still
* be alive (we wait for the other Encoder, see MediaEncoderEngine.Controller).
*/
private void release() {
LOG.w(getName(), "is being released. Notifying controller and releasing codecs.");
// TODO should we notify after this method?
mController.notifyReleased(mTrackIndex);
@CallSuper
protected void onStopped() {
LOG.w(mName, "is being released. Notifying controller and releasing codecs.");
// TODO should we call notifyStopped after this method ends?
mController.notifyStopped(mTrackIndex);
mMediaCodec.stop();
mMediaCodec.release();
mMediaCodec = null;
mOutputBufferPool.clear();
mOutputBufferPool = null;
mBuffers = null;
onRelease();
setState(STATE_STOPPED);
}
/**
* This is called when we are stopped.
* It is a good moment to release all resources, although the muxer
* might still be alive (we wait for the other Encoder, see Controller).
*/
abstract void onRelease();
/**
* Returns a new input buffer and index, waiting at most {@link #INPUT_TIMEOUT_US} if none is available.
* Callers should check the boolean result - true if the buffer was filled.
@ -234,7 +349,7 @@ abstract class MediaEncoder {
*/
@SuppressWarnings("WeakerAccess")
protected void encodeInputBuffer(InputBuffer buffer) {
LOG.v(getName(), "ENCODING - Buffer:", buffer.index, "Bytes:", buffer.length, "Presentation:", buffer.timestamp);
LOG.v(mName, "ENCODING - Buffer:", buffer.index, "Bytes:", buffer.length, "Presentation:", buffer.timestamp);
if (buffer.isEndOfStream) { // send EOS
mMediaCodec.queueInputBuffer(buffer.index, 0, 0,
buffer.timestamp, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
@ -244,16 +359,6 @@ abstract class MediaEncoder {
}
}
/**
* Signals the end of input stream. This is a Video only API, as in the normal case,
* we use input buffers to signal the end. In the video case, we don't have input buffers
* because we use an input surface instead.
*/
@SuppressWarnings("WeakerAccess")
protected void signalEndOfInputStream() {
mMediaCodec.signalEndOfInputStream();
}
/**
* Extracts all pending data that was written and encoded into {@link #mMediaCodec},
* and forwards it to the muxer.
@ -267,7 +372,7 @@ abstract class MediaEncoder {
@SuppressLint("LogNotTimber")
@SuppressWarnings("WeakerAccess")
protected void drainOutput(boolean drainAll) {
LOG.v(getName(), "DRAINING - EOS:", drainAll);
LOG.v(mName, "DRAINING - EOS:", drainAll);
if (mMediaCodec == null) {
LOG.e("drain() was called before prepare() or after releasing.");
return;
@ -289,7 +394,8 @@ abstract class MediaEncoder {
// should happen before receiving buffers, and should only happen once
if (mController.isStarted()) throw new RuntimeException("MediaFormat changed twice.");
MediaFormat newFormat = mMediaCodec.getOutputFormat();
mTrackIndex = mController.requestStart(newFormat);
mTrackIndex = mController.notifyStarted(newFormat);
setState(STATE_STARTED);
mOutputBufferPool = new OutputBufferPool(mTrackIndex);
} else if (encoderStatus < 0) {
LOG.e("Unexpected result from dequeueOutputBuffer: " + encoderStatus);
@ -301,25 +407,29 @@ abstract class MediaEncoder {
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
boolean isCodecConfig = (mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0;
if (!isCodecConfig && mController.isStarted() && mBufferInfo.size != 0) {
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(mBufferInfo.offset);
encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
// Store startPresentationTime and lastPresentationTime, useful for example to
// detect the mMaxLengthReached and stop recording.
if (mStartPresentationTimeUs == Long.MIN_VALUE) {
mStartPresentationTimeUs = mBufferInfo.presentationTimeUs;
// Store mStartTimeUs and mLastTimeUs, useful to detect the max length
// reached and stop recording when needed.
if (mStartTimeUs == Long.MIN_VALUE) {
mStartTimeUs = mBufferInfo.presentationTimeUs;
LOG.w(mName, "DRAINING - Got the first presentation time:", mStartTimeUs);
}
mLastPresentationTimeUs = mBufferInfo.presentationTimeUs;
// Pass presentation times as offets with respect to the mStartPresentationTimeUs.
// This ensures consistency between audio pts (coming from System.nanoTime()) and
// video pts (coming from SurfaceTexture) both of which have no meaningful time-base
// and should be used for offsets only.
// TODO find a better way, this causes sync issues. (+ note: this sends pts=0 at first)
// mBufferInfo.presentationTimeUs = mLastPresentationTimeUs - mStartPresentationTimeUs;
LOG.v(getName(), "DRAINING - About to write(). Presentation:", mBufferInfo.presentationTimeUs);
// TODO fix the mBufferInfo being the same, then implement delayed writing in Controller
// and remove the isStarted() check here.
mLastTimeUs = mBufferInfo.presentationTimeUs;
// Adjust the presentation times. Subclasses can pass a presentation time in any
// reference system - possibly one with no absolute meaning - and frequently,
// presentation times from different encoders have a different time-base.
// To address this, encoders are required to call notifyFirstFrameMillis
// so we can adjust here - moving to 1970 reference.
// Extra benefit: we never pass a pts equal to 0, which some encoders refuse.
mBufferInfo.presentationTimeUs = (mStartTimeMillis * 1000) + mLastTimeUs - mStartTimeUs;
// Write.
LOG.v(mName, "DRAINING - About to write(). Adjusted presentation:", mBufferInfo.presentationTimeUs);
OutputBuffer buffer = mOutputBufferPool.get();
//noinspection ConstantConditions
buffer.info = mBufferInfo;
@ -333,29 +443,76 @@ abstract class MediaEncoder {
// Not needed if drainAll because we already were asked to stop
if (!drainAll
&& !mMaxLengthReached
&& mStartPresentationTimeUs != Long.MIN_VALUE
&& mLastPresentationTimeUs - mStartPresentationTimeUs > mMaxLengthMillis * 1000) {
LOG.w(getName(), "DRAINING - Reached maxLength! mLastPresentationTimeUs:", mLastPresentationTimeUs,
"mStartPresentationTimeUs:", mStartPresentationTimeUs,
&& mStartTimeUs != Long.MIN_VALUE
&& mLastTimeUs - mStartTimeUs > mMaxLengthMillis * 1000) {
LOG.w(mName, "DRAINING - Reached maxLength! mLastTimeUs:", mLastTimeUs,
"mStartTimeUs:", mStartTimeUs,
"mMaxLengthUs:", mMaxLengthMillis * 1000);
mMaxLengthReached = true;
LOG.w(getName(), "DRAINING - Requesting a stop.");
mController.requestStop(mTrackIndex);
onMaxLengthReached();
break;
}
// Check for the EOS flag so we can release the encoder.
// Check for the EOS flag so we can call onStopped.
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
LOG.w(getName(), "DRAINING - Got EOS. Releasing the codec.");
release();
LOG.w(mName, "DRAINING - Got EOS. Releasing the codec.");
onStopped();
break;
}
}
}
}
private long mStartPresentationTimeUs = Long.MIN_VALUE;
private long mLastPresentationTimeUs = 0;
protected abstract int getEncodedBitRate();
/**
* Returns the max length setting, in milliseconds, which subclasses can use
* to track their own progress and eventually call {@link #notifyMaxLengthReached()}.
* This is not a requirement for subclasses - we do this check anyway when draining -
* but checking at the input side can be more accurate.
*
* @return the max length setting
*/
@SuppressWarnings("WeakerAccess")
protected long getMaxLengthMillis() {
return mMaxLengthMillis;
}
/**
* Called by subclasses to notify that the max length was reached.
* We will move to {@link #STATE_LIMIT_REACHED} and request a stop.
*/
@SuppressWarnings("WeakerAccess")
protected void notifyMaxLengthReached() {
onMaxLengthReached();
}
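// Illustrative sketch (not part of this class): a subclass that reads its own input,
// for example an audio encoder, could anticipate the limit with something like:
//   long elapsedUs = lastInputTimeUs - firstInputTimeUs; // hypothetical bookkeeping
//   if (getMaxLengthMillis() > 0 && elapsedUs > getMaxLengthMillis() * 1000) {
//       notifyMaxLengthReached(); // calling twice is safe, see onMaxLengthReached() below
//   }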
/**
* Called by us (during {@link #drainOutput(boolean)}) or by subclasses
* (through {@link #notifyMaxLengthReached()}) to notify that we reached the
* max length allowed. We will move to {@link #STATE_LIMIT_REACHED} and request a stop.
*/
private void onMaxLengthReached() {
if (mMaxLengthReached) return;
mMaxLengthReached = true;
if (mState >= STATE_LIMIT_REACHED) {
LOG.w(mName, "onMaxLengthReached: Reached in wrong state. Aborting.", mState);
} else {
LOG.w(mName, "onMaxLengthReached: Requesting a stop.");
setState(STATE_LIMIT_REACHED);
mController.requestStop(mTrackIndex);
}
}
abstract int getEncodedBitRate();
/**
* Should be called by subclasses to pass the milliseconds of the first frame - as soon
* as this information is available. The milliseconds should be in the
* {@link System#currentTimeMillis()} reference system, so we can coordinate between different
* encoders.
*
* @param firstFrameMillis the milliseconds of the first frame presentation
*/
@SuppressWarnings("WeakerAccess")
protected void notifyFirstFrameMillis(long firstFrameMillis) {
mStartTimeMillis = firstFrameMillis;
}
}

@ -1,8 +1,10 @@
package com.otaliastudios.cameraview.video.encoding;
import android.annotation.SuppressLint;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.os.Build;
import android.text.format.DateFormat;
import com.otaliastudios.cameraview.CameraLogger;
@ -13,9 +15,42 @@ import androidx.annotation.RequiresApi;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* The entry point for encoding video files.
*
* The external API is simple but the internal mechanism is not easy. Basically the engine
* controls a {@link MediaEncoder} instance for each track (e.g. one for video, one for audio).
*
* 1. We prepare the MediaEncoders: {@link MediaEncoder#prepare(Controller, long)}
* MediaEncoders can be prepared synchronously or not.
*
* 2. Someone calls {@link #start()} from any thread.
* As a consequence, we start the MediaEncoders: {@link MediaEncoder#start()}.
*
* 3. MediaEncoders do not start synchronously. Instead, they call
* {@link Controller#notifyStarted(MediaFormat)} when they have a legit format,
* and we keep track of who has started.
*
* 4. When all MediaEncoders have started, we actually start the muxer.
*
* 5. Someone calls {@link #stop()} from any thread.
* As a consequence, we stop the MediaEncoders: {@link MediaEncoder#stop()}.
*
* 6. MediaEncoders do not stop synchronously. Instead, they will stop reading but
* keep draining the codec until there's no data left. At that point, they can
* call {@link Controller#notifyStopped(int)}.
*
* 7. When all MediaEncoders have stopped, we actually stop the muxer and notify the listener.
*
* There is another possibility where MediaEncoders themselves want to stop, for example
* because they reach some limit or constraint (e.g. max duration). For this, they should
* call {@link Controller#requestStop(int)}. Once all MediaEncoders have requested to stop,
* we will actually call {@link #stop()} on ourselves.
*/
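// Illustrative usage sketch (hypothetical wiring, not code from this file), following
// the steps above. The exact constructor parameters are an assumption here.
//   MediaEncoderEngine engine = new MediaEncoderEngine(outputFile,
//           videoEncoder, audioEncoder, maxDuration, maxSize, listener); // 1. prepare
//   engine.start();  // 2-4. encoders call notifyStarted(), then the muxer starts
//   // ... frames are fed to the encoders while recording ...
//   engine.stop();   // 5-7. encoders drain, call notifyStopped(), then onEncodingEnd()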
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR2)
public class MediaEncoderEngine {
@ -33,7 +68,17 @@ public class MediaEncoderEngine {
void onEncodingStart();
/**
* Called when encoding stopped for some reason.
* Called when encoding stopped. At this point the muxer might still be processing,
* but we have stopped receiving input (recording video and audio frames).
*
* The {@link #onEncodingEnd(int, Exception)} callback will soon be called
* with the results.
*/
@EncoderThread
void onEncodingStop();
/**
* Called when encoding ended for some reason.
* If an exception is present, the operation failed.
* @param reason the reason
* @param e the error, if present
@ -44,13 +89,14 @@ public class MediaEncoderEngine {
private final static String TAG = MediaEncoderEngine.class.getSimpleName();
private final static CameraLogger LOG = CameraLogger.create(TAG);
private static final boolean DEBUG_PERFORMANCE = true;
@SuppressWarnings("WeakerAccess")
public final static int END_BY_USER = 0;
public final static int END_BY_MAX_DURATION = 1;
public final static int END_BY_MAX_SIZE = 2;
private ArrayList<MediaEncoder> mEncoders;
private List<MediaEncoder> mEncoders;
private MediaMuxer mMediaMuxer;
private int mStartedEncodersCount;
private int mReleasedEncodersCount;
@ -148,7 +194,7 @@ public class MediaEncoderEngine {
/**
* Asks encoders to stop. This is not synchronous: we first ask encoders
* to call {@link Controller#notifyReleased(int)} before actually stop the muxer.
* to call {@link Controller#notifyStopped(int)} before actually stopping the muxer.
* When all encoders have stopped, {@link #end()} is called to do cleanup
* and notify the listener.
*/
@ -160,7 +206,7 @@ public class MediaEncoderEngine {
}
/**
* Called after all encoders have requested a release using {@link Controller#notifyReleased(int)}.
* Called after all encoders have notified a stop through {@link Controller#notifyStopped(int)}.
* At this point we will do cleanup and notify the listener.
*/
private void end() {
@ -217,7 +263,8 @@ public class MediaEncoderEngine {
* A handle for {@link MediaEncoder}s to pass information to this engine.
* All methods here can be called from multiple threads.
*/
class Controller {
@SuppressWarnings("WeakerAccess")
public class Controller {
/**
* Request that the muxer should start. This is not guaranteed to be executed:
@ -225,15 +272,15 @@ public class MediaEncoderEngine {
* @param format the media format
* @return the encoder track index
*/
int requestStart(@NonNull MediaFormat format) {
public int notifyStarted(@NonNull MediaFormat format) {
synchronized (mControllerLock) {
if (mMediaMuxerStarted) {
throw new IllegalStateException("Trying to start but muxer started already");
}
int track = mMediaMuxer.addTrack(format);
LOG.w("requestStart:", "Assigned track", track, "to format", format.getString(MediaFormat.KEY_MIME));
LOG.w("notifyStarted:", "Assigned track", track, "to format", format.getString(MediaFormat.KEY_MIME));
if (++mStartedEncodersCount == mEncoders.size()) {
LOG.w("requestStart:", "All encoders have started. Starting muxer and dispatching onEncodingStart().");
LOG.w("notifyStarted:", "All encoders have started. Starting muxer and dispatching onEncodingStart().");
mMediaMuxer.start();
mMediaMuxerStarted = true;
if (mListener != null) {
@ -245,41 +292,58 @@ public class MediaEncoderEngine {
}
/**
* Whether the muxer is started.
* Whether the muxer is started. MediaEncoders are required to avoid
* calling {@link #write(OutputBufferPool, OutputBuffer)} until this method returns true.
*
* @return true if muxer was started
*/
boolean isStarted() {
public boolean isStarted() {
synchronized (mControllerLock) {
return mMediaMuxerStarted;
}
}
@SuppressLint("UseSparseArrays")
private Map<Integer, Integer> mDebugCount = new HashMap<>();
/**
* Writes the given data to the muxer. Should be called after {@link #isStarted()}
* returns true. Note: this seems to be thread safe, no lock.
* TODO cache values if not started yet, then apply later. Read comments in drain().
* Currently they are recycled instantly.
*/
void write(@NonNull OutputBufferPool pool, @NonNull OutputBuffer buffer) {
public void write(@NonNull OutputBufferPool pool, @NonNull OutputBuffer buffer) {
if (!mMediaMuxerStarted) {
throw new IllegalStateException("Trying to write before muxer started");
}
// This is a bad idea and causes crashes.
// if (info.presentationTimeUs < mLastTimestampUs) info.presentationTimeUs = mLastTimestampUs;
// mLastTimestampUs = info.presentationTimeUs;
LOG.v("write:", "Writing OutputBuffer - track:", buffer.trackIndex, "presentation:", buffer.info.presentationTimeUs);
if (DEBUG_PERFORMANCE) {
// When AUDIO = mono, this is called about twice as often (roughly 200 vs 100 times for 5 seconds).
Integer count = mDebugCount.get(buffer.trackIndex);
mDebugCount.put(buffer.trackIndex, count == null ? 1 : ++count);
Calendar calendar = Calendar.getInstance();
calendar.setTimeInMillis(buffer.info.presentationTimeUs / 1000);
LOG.v("write:", "Writing into muxer -",
"track:", buffer.trackIndex,
"presentation:", buffer.info.presentationTimeUs,
"readable:", calendar.get(Calendar.SECOND) + ":" + calendar.get(Calendar.MILLISECOND),
"count:", count);
} else {
LOG.v("write:", "Writing into muxer -",
"track:", buffer.trackIndex,
"presentation:", buffer.info.presentationTimeUs);
}
mMediaMuxer.writeSampleData(buffer.trackIndex, buffer.data, buffer.info);
pool.recycle(buffer);
}
/**
* Requests that the engine stops. This is not executed until all encoders call
* this method, so it is a kind of soft request, just like {@link #requestStart(MediaFormat)}.
* this method, so it is a kind of soft request, just like {@link #notifyStarted(MediaFormat)}.
* To be used when maxLength / maxSize constraints are reached, for example.
*
* When this succeeds, {@link MediaEncoder#stop()} is called.
*/
void requestStop(int track) {
public void requestStop(int track) {
synchronized (mControllerLock) {
LOG.w("requestStop:", "Called for track", track);
if (--mStartedEncodersCount == 0) {
@ -294,11 +358,14 @@ public class MediaEncoderEngine {
* Notifies that the encoder was stopped. After this is called by all encoders,
* we will actually stop the muxer.
*/
void notifyReleased(int track) {
public void notifyStopped(int track) {
synchronized (mControllerLock) {
LOG.w("notifyReleased:", "Called for track", track);
LOG.w("notifyStopped:", "Called for track", track);
if (++mReleasedEncodersCount == mEncoders.size()) {
LOG.w("requestStop:", "All encoders have been released. Stopping the muxer.");
if (mListener != null) {
mListener.onEncodingStop();
}
end();
}
}

@ -9,8 +9,9 @@ import java.nio.ByteBuffer;
* an encoded buffer of data that should be passed
* to the muxer.
*/
class OutputBuffer {
MediaCodec.BufferInfo info;
int trackIndex;
ByteBuffer data;
@SuppressWarnings("WeakerAccess")
public class OutputBuffer {
public MediaCodec.BufferInfo info;
public int trackIndex;
public ByteBuffer data;
}

@ -0,0 +1,38 @@
package com.otaliastudios.cameraview.video.encoding;
import android.opengl.EGLContext;
import androidx.annotation.NonNull;
/**
* Video configuration to be passed as input to the constructor
* of a {@link TextureMediaEncoder}.
*/
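// A minimal sketch of how a caller might fill this config before creating the encoder.
// The concrete values below are placeholders, not defaults of this library:
//   TextureConfig config = new TextureConfig();
//   config.width = 1080; config.height = 1920;
//   config.bitRate = 2000000; config.frameRate = 30;
//   config.rotation = 90; config.mimeType = "video/avc";
//   config.textureId = cameraTextureId; // the texture the camera preview draws into
//   config.scaleX = 1F; config.scaleY = 1F;
//   config.eglContext = EGL14.eglGetCurrentContext(); // android.opengl.EGL14
//   TextureMediaEncoder videoEncoder = new TextureMediaEncoder(config);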
public class TextureConfig extends VideoConfig {
private final static int NO_TEXTURE = Integer.MIN_VALUE;
public int textureId = NO_TEXTURE;
public int overlayTextureId = NO_TEXTURE;
public int overlayRotation;
public float scaleX;
public float scaleY;
public EGLContext eglContext;
@NonNull
TextureConfig copy() {
TextureConfig copy = new TextureConfig();
copy(copy);
copy.textureId = this.textureId;
copy.overlayTextureId = this.overlayTextureId;
copy.overlayRotation = this.overlayRotation;
copy.scaleX = this.scaleX;
copy.scaleY = this.scaleY;
copy.eglContext = this.eglContext;
return copy;
}
boolean hasOverlay() {
return overlayTextureId != NO_TEXTURE;
}
}

@ -1,6 +1,6 @@
package com.otaliastudios.cameraview.video.encoding;
import android.opengl.EGLContext;
import android.graphics.SurfaceTexture;
import android.opengl.Matrix;
import android.os.Build;
@ -18,68 +18,64 @@ import androidx.annotation.RequiresApi;
* Default implementation for video encoding.
*/
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR2)
public class TextureMediaEncoder extends VideoMediaEncoder<TextureMediaEncoder.Config> {
public class TextureMediaEncoder extends VideoMediaEncoder<TextureConfig> {
private static final String TAG = TextureMediaEncoder.class.getSimpleName();
private static final CameraLogger LOG = CameraLogger.create(TAG);
public final static String FRAME_EVENT = "frame";
public final static int NO_TEXTURE = Integer.MIN_VALUE;
public static class Config extends VideoMediaEncoder.Config {
int textureId;
int overlayTextureId;
float scaleX;
float scaleY;
EGLContext eglContext;
int transformRotation;
int overlayTransformRotation;
public Config(int width, int height,
int bitRate, int frameRate,
int rotation, @NonNull String mimeType,
int textureId,
float scaleX, float scaleY,
@NonNull EGLContext eglContext,
int overlayTextureId, int overlayRotation) {
// We rotate the texture using transformRotation. Pass rotation=0 to super so that
// no rotation metadata is written into the output file.
super(width, height, bitRate, frameRate, 0, mimeType);
this.transformRotation = rotation;
this.textureId = textureId;
this.scaleX = scaleX;
this.scaleY = scaleY;
this.eglContext = eglContext;
this.overlayTextureId = overlayTextureId;
this.overlayTransformRotation = overlayRotation;
}
}
private int mTransformRotation;
private EglCore mEglCore;
private EglWindowSurface mWindow;
private EglViewport mViewport;
private Pool<TextureFrame> mFramePool = new Pool<>(100, new Pool.Factory<TextureFrame>() {
private Pool<Frame> mFramePool = new Pool<>(100, new Pool.Factory<Frame>() {
@Override
public TextureFrame create() {
return new TextureFrame();
public Frame create() {
return new Frame();
}
});
public TextureMediaEncoder(@NonNull Config config) {
super(config);
public TextureMediaEncoder(@NonNull TextureConfig config) {
super(config.copy());
}
public static class TextureFrame {
private TextureFrame() {}
// Nanoseconds, in no meaningful time-base. Should be for offsets only.
// Typically coming from SurfaceTexture.getTimestamp().
/**
* Should be acquired with {@link #acquireFrame()}, filled and then passed
* to {@link MediaEncoderEngine#notify(String, Object)} with {@link #FRAME_EVENT}.
*/
public static class Frame {
private Frame() {}
/**
* Nanoseconds, in no meaningful time-base. Will be used for offsets only.
* Typically this comes from {@link SurfaceTexture#getTimestamp()}.
*/
public long timestamp;
/**
* Milliseconds in the {@link System#currentTimeMillis()} reference.
* This is actually needed/read only for the first frame.
*/
public long timestampMillis;
/**
* The transformation matrix for the base texture.
*/
public float[] transform = new float[16];
/**
* The transformation matrix for the overlay texture, if any.
*/
public float[] overlayTransform = new float[16];
}
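// Illustrative producer-side sketch (simplified, not code from this file): acquire,
// fill and post a frame as described above.
//   TextureMediaEncoder.Frame frame = videoEncoder.acquireFrame();
//   frame.timestamp = surfaceTexture.getTimestamp();      // nanoseconds, offsets only
//   frame.timestampMillis = System.currentTimeMillis();   // read for the first frame
//   surfaceTexture.getTransformMatrix(frame.transform);
//   engine.notify(TextureMediaEncoder.FRAME_EVENT, frame); // ends up in onEvent() below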
/**
* Returns a new frame to be filled. See {@link Frame} for details.
* @return a new frame
*/
@NonNull
public TextureFrame acquireFrame() {
public Frame acquireFrame() {
if (mFramePool.isEmpty()) {
throw new RuntimeException("Need more frames than this! Please increase the pool size.");
} else {
@ -88,10 +84,13 @@ public class TextureMediaEncoder extends VideoMediaEncoder<TextureMediaEncoder.C
}
}
@EncoderThread
@Override
void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
protected void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
// We rotate the texture using transformRotation. Pass rotation=0 to super so that
// no rotation metadata is written into the output file.
mTransformRotation = mConfig.rotation;
mConfig.rotation = 0;
super.onPrepare(controller, maxLengthMillis);
mEglCore = new EglCore(mConfig.eglContext, EglCore.FLAG_RECORDABLE);
mWindow = new EglWindowSurface(mEglCore, mSurface, true);
@ -102,28 +101,33 @@ public class TextureMediaEncoder extends VideoMediaEncoder<TextureMediaEncoder.C
@EncoderThread
@Override
void onStart() {
super.onStart();
// Nothing to do here. Waiting for the first frame.
}
@EncoderThread
@Override
void onEvent(@NonNull String event, @Nullable Object data) {
protected void onEvent(@NonNull String event, @Nullable Object data) {
if (!event.equals(FRAME_EVENT)) return;
TextureFrame frame = (TextureFrame) data;
if (frame == null) return; // Should not happen
if (frame.timestamp == 0 || mFrameNum < 0) {
// The first condition comes from grafika.
// The second condition means we were asked to stop.
Frame frame = (Frame) data;
if (frame == null) {
throw new IllegalArgumentException("Got null frame for FRAME_EVENT.");
}
if (frame.timestamp == 0) { // grafika
mFramePool.recycle(frame);
return;
}
if (mFrameNumber < 0) { // We were asked to stop.
mFramePool.recycle(frame);
return;
}
mFrameNumber++;
if (mFrameNumber == 1) {
notifyFirstFrameMillis(frame.timestampMillis);
}
// First, drain any previous data.
LOG.i("onEvent", "frameNumber:", mFrameNumber, "timestamp:", frame.timestamp, "- draining.");
drainOutput(false);
// Then draw on the surface.
LOG.i("onEvent", "frameNumber:", mFrameNumber, "timestamp:", frame.timestamp, "- drawing.");
mFrameNum++;
int thisFrameNum = mFrameNum;
LOG.v("onEvent", "frameNum:", thisFrameNum, "realFrameNum:", mFrameNum, "timestamp:", frame.timestamp);
// We must scale this matrix like GlCameraPreview does, because it might have some cropping.
// 1. We must scale this matrix like GlCameraPreview does, because it might have some cropping.
// Scaling takes place with respect to the (0, 0, 0) point, so we must apply a Translation to compensate.
float[] transform = frame.transform;
float[] overlayTransform = frame.overlayTransform;
@ -134,36 +138,32 @@ public class TextureMediaEncoder extends VideoMediaEncoder<TextureMediaEncoder.C
Matrix.translateM(transform, 0, scaleTranslX, scaleTranslY, 0);
Matrix.scaleM(transform, 0, scaleX, scaleY, 1);
// We also must rotate this matrix. In GlCameraPreview it is not needed because it is a live
// 2. We also must rotate this matrix. In GlCameraPreview it is not needed because it is a live
// stream, but the output video must be correctly rotated based on the device rotation at the moment.
// Rotation also takes place with respect to the origin (the Z axis), so we must
// translate to origin, rotate, then back to where we were.
Matrix.translateM(transform, 0, 0.5F, 0.5F, 0);
Matrix.rotateM(transform, 0, mConfig.transformRotation, 0, 0, 1);
Matrix.rotateM(transform, 0, mTransformRotation, 0, 0, 1);
Matrix.translateM(transform, 0, -0.5F, -0.5F, 0);
boolean hasOverlay = mConfig.overlayTextureId != NO_TEXTURE;
if (hasOverlay) {
// 3. Do the same for overlays with their own rotation.
if (mConfig.hasOverlay()) {
Matrix.translateM(overlayTransform, 0, 0.5F, 0.5F, 0);
Matrix.rotateM(overlayTransform, 0, mConfig.overlayTransformRotation, 0, 0, 1);
Matrix.rotateM(overlayTransform, 0, mConfig.overlayRotation, 0, 0, 1);
Matrix.translateM(overlayTransform, 0, -0.5F, -0.5F, 0);
}
LOG.v("onEvent", "frameNum:", thisFrameNum, "realFrameNum:", mFrameNum, "calling drainOutput.");
drainOutput(false);
LOG.v("onEvent", "frameNum:", thisFrameNum, "realFrameNum:", mFrameNum, "calling drawFrame.");
mViewport.drawFrame(mConfig.textureId, transform);
if (hasOverlay) {
if (mConfig.hasOverlay()) {
mViewport.drawFrame(mConfig.overlayTextureId, overlayTransform);
}
mWindow.setPresentationTime(frame.timestamp);
mWindow.swapBuffers();
mFramePool.recycle(frame);
}
@Override
void onRelease() {
protected void onStopped() {
super.onStopped();
mFramePool.clear();
if (mWindow != null) {
mWindow.release();

@ -0,0 +1,25 @@
package com.otaliastudios.cameraview.video.encoding;
import androidx.annotation.NonNull;
/**
* Base video configuration to be passed as input to the constructor
* of a {@link VideoMediaEncoder}.
*/
public class VideoConfig {
public int width;
public int height;
public int bitRate;
public int frameRate;
public int rotation;
public String mimeType;
protected <C extends VideoConfig> void copy(@NonNull C output) {
output.width = this.width;
output.height = this.height;
output.bitRate = this.bitRate;
output.frameRate = this.frameRate;
output.rotation = this.rotation;
output.mimeType = this.mimeType;
}
}

@ -14,13 +14,22 @@ import com.otaliastudios.cameraview.CameraLogger;
import java.io.IOException;
/**
* This alone does nothing.
* Subclasses must make sure they write each frame onto the given Surface {@link #mSurface}.
* Base class for video encoding.
*
* This uses {@link MediaCodec#createInputSurface()} to create an input {@link Surface}
* into which we can write and that MediaCodec itself can read.
*
* This makes everything easier with respect to the process explained in {@link MediaEncoder}
* docs. We can skip the whole input part of acquiring an InputBuffer, filling it with data
* and returning it to the encoder with {@link #encodeInputBuffer(InputBuffer)}.
*
* All of this is automatically done by MediaCodec as long as we keep writing data into the
* given {@link Surface}. This class alone does not do this - subclasses are required to do so.
*
* @param <C> the config object.
*/
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN_MR2)
abstract class VideoMediaEncoder<C extends VideoMediaEncoder.Config> extends MediaEncoder {
abstract class VideoMediaEncoder<C extends VideoConfig> extends MediaEncoder {
private static final String TAG = VideoMediaEncoder.class.getSimpleName();
private static final CameraLogger LOG = CameraLogger.create(TAG);
@ -32,39 +41,16 @@ abstract class VideoMediaEncoder<C extends VideoMediaEncoder.Config> extends Med
protected Surface mSurface;
@SuppressWarnings("WeakerAccess")
protected int mFrameNum = -1;
static class Config {
int width;
int height;
int bitRate;
int frameRate;
int rotation;
String mimeType;
Config(int width, int height, int bitRate, int frameRate, int rotation, @NonNull String mimeType) {
this.width = width;
this.height = height;
this.bitRate = bitRate;
this.frameRate = frameRate;
this.rotation = rotation;
this.mimeType = mimeType;
}
}
protected int mFrameNumber = -1;
VideoMediaEncoder(@NonNull C config) {
super("VideoEncoder");
mConfig = config;
}
@NonNull
@Override
String getName() {
return "VideoEncoder";
}
@EncoderThread
@Override
void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
protected void onPrepare(@NonNull MediaEncoderEngine.Controller controller, long maxLengthMillis) {
MediaFormat format = MediaFormat.createVideoFormat(mConfig.mimeType, mConfig.width, mConfig.height);
// Set some properties. Failing to specify some of these can cause the MediaCodec
@ -89,22 +75,25 @@ abstract class VideoMediaEncoder<C extends VideoMediaEncoder.Config> extends Med
@EncoderThread
@Override
void onStart() {
protected void onStart() {
// Nothing to do here. Waiting for the first frame.
mFrameNum = 0;
mFrameNumber = 0;
}
@EncoderThread
@Override
void onStop() {
LOG.i("onStop", "setting mFrameNum to 1 and signaling the end of input stream.");
mFrameNum = -1;
signalEndOfInputStream();
protected void onStop() {
LOG.i("onStop", "setting mFrameNumber to 1 and signaling the end of input stream.");
mFrameNumber = -1;
// Signals the end of input stream. This is a Video only API, as in the normal case,
// we use input buffers to signal the end. In the video case, we don't have input buffers
// because we use an input surface instead.
mMediaCodec.signalEndOfInputStream();
drainOutput(true);
}
@Override
int getEncodedBitRate() {
protected int getEncodedBitRate() {
return mConfig.bitRate;
}
}

@ -102,6 +102,8 @@
<attr name="cameraAudio" format="enum">
<enum name="off" value="0" />
<enum name="on" value="1" />
<enum name="mono" value="2" />
<enum name="stereo" value="3" />
</attr>
<attr name="cameraGrid" format="enum">

@ -0,0 +1,168 @@
package com.otaliastudios.cameraview.internal.utils;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;
import static junit.framework.Assert.assertNotNull;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertNull;
public class PoolTest {
private final static int MAX_SIZE = 20;
private class Item {}
private Pool<Item> pool;
private int instances = 0;
@Before
public void setUp() {
pool = new Pool<>(MAX_SIZE, new Pool.Factory<Item>() {
@Override
public Item create() {
instances++;
return new Item();
}
});
}
@After
public void tearDown() {
instances = 0;
pool = null;
}
@Test
public void testInstances() {
for (int i = 0; i < MAX_SIZE; i++) {
assertEquals(instances, i);
pool.get();
}
}
@Test
public void testIsEmpty() {
assertFalse(pool.isEmpty());
// Get all items without recycling.
Item item = null;
for (int i = 0; i < MAX_SIZE; i++) {
item = pool.get();
}
assertTrue(pool.isEmpty());
}
@Test
public void testClear() {
// Take one and recycle it
Item item = pool.get();
assertNotNull(item);
pool.recycle(item);
// Ensure it is recycled.
assertEquals(pool.recycledCount(), 1);
assertEquals(pool.activeCount(), 0);
assertEquals(pool.count(), 1);
// Now clear and ensure pool is empty.
pool.clear();
assertEquals(pool.recycledCount(), 0);
assertEquals(pool.activeCount(), 0);
assertEquals(pool.count(), 0);
}
@Test
public void testCounts() {
assertEquals(pool.recycledCount(), 0);
assertEquals(pool.activeCount(), 0);
assertEquals(pool.count(), 0);
// Take all
List<Item> items = new ArrayList<>();
for (int i = 0; i < MAX_SIZE; i++) {
items.add(pool.get());
assertEquals(pool.recycledCount(), 0);
assertEquals(pool.activeCount(), items.size());
assertEquals(pool.count(), items.size());
}
// Recycle all
int recycled = 0;
for (Item item : items) {
pool.recycle(item);
recycled++;
assertEquals(pool.recycledCount(), recycled);
assertEquals(pool.activeCount(), MAX_SIZE - recycled);
assertEquals(pool.count(), MAX_SIZE);
}
}
@Test
public void testToString() {
String string = pool.toString();
assertTrue(string.contains("count"));
assertTrue(string.contains("active"));
assertTrue(string.contains("recycled"));
assertTrue(string.contains(Pool.class.getSimpleName()));
}
@Test(expected = IllegalStateException.class)
public void testRecycle_notActive() {
Item item = new Item();
pool.recycle(item);
}
@Test(expected = IllegalStateException.class)
public void testRecycle_twice() {
Item item = pool.get();
assertNotNull(item);
pool.recycle(item);
pool.recycle(item);
}
@Test(expected = IllegalStateException.class)
public void testRecycle_whileFull() {
// Take all and recycle all
List<Item> items = new ArrayList<>();
for (int i = 0; i < MAX_SIZE; i++) {
items.add(pool.get());
}
for (Item item : items) {
pool.recycle(item);
}
// Take one and recycle again
pool.recycle(items.get(0));
}
@Test
public void testGet_fromFactory() {
pool.get();
assertEquals(1, instances);
}
@Test
public void testGet_whenFull() {
for (int i = 0; i < MAX_SIZE; i++) {
pool.get();
}
assertNull(pool.get());
}
@Test
public void testGet_recycled() {
Item item = pool.get();
assertNotNull(item);
pool.recycle(item);
Item newItem = pool.get();
assertEquals(item, newItem);
assertEquals(1, instances);
}
}

@ -35,6 +35,8 @@ import java.util.List;
public class CameraActivity extends AppCompatActivity implements View.OnClickListener, OptionView.Callback {
private final static CameraLogger LOG = CameraLogger.create("DemoApp");
private CameraView camera;
private ViewGroup controlPanel;
private long mCaptureTime;
@ -134,9 +136,14 @@ public class CameraActivity extends AppCompatActivity implements View.OnClickLis
animator.start();
}
private void message(String content, boolean important) {
int length = important ? Toast.LENGTH_LONG : Toast.LENGTH_SHORT;
Toast.makeText(this, content, length).show();
private void message(@NonNull String content, boolean important) {
if (important) {
LOG.w(content);
Toast.makeText(this, content, Toast.LENGTH_LONG).show();
} else {
LOG.i(content);
Toast.makeText(this, content, Toast.LENGTH_SHORT).show();
}
}
private class Listener extends CameraListener {
@ -170,7 +177,7 @@ public class CameraActivity extends AppCompatActivity implements View.OnClickLis
PicturePreviewActivity.setPictureResult(result);
Intent intent = new Intent(CameraActivity.this, PicturePreviewActivity.class);
intent.putExtra("delay", callbackTime - mCaptureTime);
Log.e("CameraActivity", "Picture delay: " + (callbackTime - mCaptureTime));
LOG.w("Picture delay:", callbackTime - mCaptureTime);
startActivity(intent);
mCaptureTime = 0;
}
@ -182,6 +189,12 @@ public class CameraActivity extends AppCompatActivity implements View.OnClickLis
Intent intent = new Intent(CameraActivity.this, VideoPreviewActivity.class);
startActivity(intent);
}
@Override
public void onVideoRecordingStart() {
super.onVideoRecordingStart();
LOG.w("onVideoRecordingStart!");
}
}
@Override

@ -17,7 +17,7 @@
android:layout_marginBottom="88dp"
android:keepScreenOn="true"
app:cameraExperimental="true"
app:cameraEngine="camera1"
app:cameraEngine="camera2"
app:cameraPreview="glSurface"
app:cameraPlaySounds="true"
app:cameraGrid="off"

@ -39,6 +39,8 @@ camera.addCameraListener(new CameraListener() {
public void onExposureCorrectionChanged(float newValue, float[] bounds, PointF[] fingers) {}
public void onVideoRecordingStart() {}
public void onVideoRecordingEnd() {}
});
```

@ -59,7 +59,7 @@ Please note that the video snapshot feature requires:
This is allowed under the following conditions (see the sketch below):
- `takePictureSnapshot()` is used (no HQ pictures)
- the OpenGL preview is used (see [previews](previews.html))
- the `GL_SURFACE` preview is used (see [previews](previews.html))
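For instance, a setup satisfying both conditions might look like this (a minimal sketch;
`Preview.GL_SURFACE` and `takePictureSnapshot()` are assumed from the v2 API):

```java
cameraView.setPreview(Preview.GL_SURFACE); // or app:cameraPreview="glSurface" in XML
cameraView.takePictureSnapshot();          // snapshot, not the HQ takePicture()
```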
### Related XML attributes
@ -68,6 +68,35 @@ This is allowed at the following conditions:
app:cameraMode="picture|video"/>
```
### Related callbacks
```java
camera.addCameraListener(new CameraListener() {
@Override
public void onPictureTaken(@NonNull PictureResult result) {
// A Picture was taken!
}
@Override
public void onVideoTaken(@NonNull VideoResult result) {
// A Video was taken!
}
@Override
public void onVideoRecordingStart() {
// Notifies that the actual video recording has started.
// Can be used to show some UI indicator for video recording or counting time.
}
@Override
public void onVideoRecordingEnd() {
// Notifies that the actual video recording has ended.
// Can be used to remove UI indicators added in onVideoRecordingStart.
}
})
```
### Related APIs
|Method|Description|

@ -12,8 +12,11 @@ New versions are released through GitHub, so the reference page is the [GitHub R
- New: support for watermarks and animated overlays ([docs](../docs/watermarks-and-overlays.html)), thanks to [@RAN3000][RAN3000] ([#502][502], [#421][421])
- New: added `onVideoRecordingStart()` to be notified when video recording starts, thanks to [@agrawalsuneet][agrawalsuneet] ([#498][498])
- New: added `onVideoRecordingEnd()` to be notified when video recording ends ([#506][506])
- New: added `Audio.MONO` and `Audio.STEREO` to control the channel count for videos and video snapshots ([#506][506])
- New: added `cameraUseDeviceOrientation` to choose whether picture and video outputs should consider the device orientation or not ([#497][497])
- Improvement: improved Camera2 stability and various bugs fixed (e.g. [#501][501])
- Improvement: improved video snapshots speed, quality and stability ([#506][506])
### v2.0.0-beta06
@ -27,7 +30,7 @@ New versions are released through GitHub, so the reference page is the [GitHub R
If you were using `focus`, just switch to `autoFocus`.
If you were using `focusWithMarker`, you can [add back the old marker](../docs/more-features.html#cameraautofocusmarker).
If you were using `focusWithMarker`, you can [add back the old marker](../docs/controls.html#cameraautofocusmarker).
### v2.0.0-beta05
@ -79,3 +82,4 @@ This is the first beta release. For changes with respect to v1, please take a lo
[498]: https://github.com/natario1/CameraView/pull/498
[501]: https://github.com/natario1/CameraView/pull/501
[502]: https://github.com/natario1/CameraView/pull/502
[506]: https://github.com/natario1/CameraView/pull/506

@ -25,7 +25,7 @@ or `CameraOptions.supports(Control)` to see if it is supported.
app:cameraFlash="off|on|auto|torch"
app:cameraWhiteBalance="auto|incandescent|fluorescent|daylight|cloudy"
app:cameraHdr="off|on"
app:cameraAudio="on|off"
app:cameraAudio="on|off|mono|stereo"
app:cameraAudioBitRate="0"
app:cameraVideoCodec="deviceDefault|h263|h264"
app:cameraVideoMaxSize="0"
@ -96,7 +96,9 @@ Defaults to `ON`.
```java
cameraView.setAudio(Audio.OFF);
cameraView.setAudio(Audio.ON);
cameraView.setAudio(Audio.ON); // on but depends on video config
cameraView.setAudio(Audio.MONO); // force mono
cameraView.setAudio(Audio.STEREO); // force stereo
```
##### cameraAudioBitRate
@ -141,7 +143,7 @@ cameraView.setVideoBitRate(0);
cameraView.setVideoBitRate(4000000);
```
### Manual Focus
### Auto Focus
There are many ways to focus a CameraView engine:
@ -173,6 +175,44 @@ cameraView.addCameraListener(new CameraListener() {
Auto focus is not guaranteed to be supported: check the `CameraOptions` to be sure.
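When supported, focus can be triggered with a mapped gesture or programmatically
(a minimal sketch; `startAutoFocus` and `GestureAction.AUTO_FOCUS` are assumed from the v2 API):

```java
cameraView.mapGesture(Gesture.TAP, GestureAction.AUTO_FOCUS); // focus where the user taps
cameraView.startAutoFocus(x, y); // or trigger it at a specific view position
```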
```xml
<com.otaliastudios.cameraview.CameraView
app:cameraAutoFocusMarker="@string/cameraview_default_autofocus_marker"
app:cameraAutoFocusResetDelay="3000"/>
```
##### cameraAutoFocusMarker
Lets you set a marker for drawing on screen in response to auto focus events.
In XML, you should pass the qualified class name of your marker.
```java
cameraView.setAutoFocusMarker(null);
cameraView.setAutoFocusMarker(marker);
```
We offer a default marker (similar to the old `focusWithMarker` attribute in v1),
which you can set in XML using the `@string/cameraview_default_autofocus_marker` resource,
or programmatically:
```java
cameraView.setAutoFocusMarker(new DefaultAutoFocusMarker());
```
##### cameraAutoFocusResetDelay
Lets you control how an auto-focus operation is reset after it completes.
Setting a value <= 0 or == Long.MAX_VALUE will not reset the auto-focus.
This is useful for low end devices that have slow auto-focus capabilities.
Defaults to 3 seconds.
```java
cameraView.setCameraAutoFocusResetDelay(1000); // 1 second
cameraView.setCameraAutoFocusResetDelay(0); // NO reset
cameraView.setCameraAutoFocusResetDelay(-1); // NO reset
cameraView.setCameraAutoFocusResetDelay(Long.MAX_VALUE); // NO reset
```
### Zoom
There are two ways to control the zoom value:
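Both are sketched below (a minimal sketch; names assumed from the v2 API, where zoom goes from 0 to 1):

```java
cameraView.mapGesture(Gesture.PINCH, GestureAction.ZOOM); // let a pinch gesture drive zoom
cameraView.setZoom(0.5F); // or set it programmatically: 0 is minimum, 1 is maximum
```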

@ -58,38 +58,6 @@ cameraView.setGridColor(Color.WHITE);
cameraView.setGridColor(Color.BLACK);
```
##### cameraAutoFocusMarker
Lets you set a marker for drawing on screen in response to auto focus events.
In XML, you should pass the qualified class name of your marker.
```java
cameraView.setAutoFocusMarker(null);
cameraView.setAutoFocusMarker(marker);
```
We offer a default marker (similar to the old `focusWithMarker` attribute in v1),
which you can set in XML using the `@string/cameraview_default_autofocus_marker` resource,
or programmatically:
```java
cameraView.setAutoFocusMarker(new DefaultAutoFocusMarker());
```
##### cameraAutoFocusResetDelay
Lets you control how an auto-focus operation is reset after completed.
Setting a value <= 0 or == Long.MAX_VALUE will not reset the auto-focus.
This is useful for low end devices that have slow auto-focus capabilities.
Defaults to 3 seconds.
```java
cameraView.setCameraAutoFocusResetDelay(1000); // 1 second
cameraView.setCameraAutoFocusResetDelay(0); // NO reset
cameraView.setCameraAutoFocusResetDelay(-1); // NO reset
cameraView.setCameraAutoFocusResetDelay(Long.MAX_VALUE); // NO reset
```
##### cameraUseDeviceOrientation
Controls whether we should consider the device orientation for picture and video outputs.
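A minimal sketch (the setter name is an assumption, mirroring the XML attribute):

```java
cameraView.setUseDeviceOrientation(true);  // rotate outputs based on the device orientation
cameraView.setUseDeviceOrientation(false); // ignore the device orientation for outputs
```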
