I'm new to the iOS camera APIs. I'm trying to build a simple app that takes photos (still images) only, starting from Apple's AVCam sample:

https://developer.apple.com/library/ios/samplecode/AVCam/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010112-Intro-DontLinkElementID_2

I want a custom (square) photo preview, as shown in the picture. I'm using code from WWDC:

(screenshot: the desired square preview)

but the result looks like this:

(screenshot: the actual result)

How can I constrain it to a square size? Thanks!

Edit: I'm attaching a picture of the result. (screenshot) How can I fix this?
Edit 2:

CMPCameraViewController:
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Disable UI. The UI is enabled if and only if the session starts running.
    self.stillButton.enabled = NO;

    // Create the AVCaptureSession.
    self.session = [[AVCaptureSession alloc] init];

    // Set up the preview view.
    self.previewView.session = self.session;

    // Communicate with the session and other session objects on this queue.
    self.sessionQueue = dispatch_queue_create("session queue", DISPATCH_QUEUE_SERIAL);

    self.setupResult = AVCamSetupResultSuccess;

    // Set up the capture session.
    // In general it is not safe to mutate an AVCaptureSession or any of its inputs,
    // outputs, or connections from multiple threads at the same time.
    // Why not do all of this on the main queue?
    // Because -[AVCaptureSession startRunning] is a blocking call which can take a long time.
    // We dispatch session setup to the sessionQueue so that the main queue isn't blocked,
    // which keeps the UI responsive.
    dispatch_async(self.sessionQueue, ^{
        if (self.setupResult != AVCamSetupResultSuccess) {
            return;
        }

        self.backgroundRecordingID = UIBackgroundTaskInvalid;
        NSError *error = nil;

        AVCaptureDevice *videoDevice = [CMPCameraViewController deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionBack];
        AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        if (!videoDeviceInput) {
            NSLog(@"Could not create video device input: %@", error);
        }

        [self.session beginConfiguration];

        if ([self.session canAddInput:videoDeviceInput]) {
            [self.session addInput:videoDeviceInput];
            self.videoDeviceInput = videoDeviceInput;

            dispatch_async(dispatch_get_main_queue(), ^{
                // Why are we dispatching this to the main queue?
                // Because AVCaptureVideoPreviewLayer is the backing layer for AAPLPreviewView,
                // and UIView can only be manipulated on the main thread.
                // Note: As an exception to the above rule, it is not necessary to serialize
                // video orientation changes on the AVCaptureVideoPreviewLayer's connection
                // with other session manipulation.
                //
                // Use the status bar orientation as the initial video orientation.
                // Subsequent orientation changes are handled by
                // -[viewWillTransitionToSize:withTransitionCoordinator:].
                UIInterfaceOrientation statusBarOrientation = [UIApplication sharedApplication].statusBarOrientation;
                AVCaptureVideoOrientation initialVideoOrientation = AVCaptureVideoOrientationPortrait;
                if (statusBarOrientation != UIInterfaceOrientationUnknown) {
                    initialVideoOrientation = (AVCaptureVideoOrientation)statusBarOrientation;
                }

                AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.previewView.layer;
                previewLayer.connection.videoOrientation = initialVideoOrientation;
                previewLayer.bounds = _previewView.frame;
                //previewLayer.connection.videoOrientation = UIInterfaceOrientationLandscapeLeft;
            });
        }
        else {
            NSLog(@"Could not add video device input to the session");
            self.setupResult = AVCamSetupResultSessionConfigurationFailed;
        }

        AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
        if (!audioDeviceInput) {
            NSLog(@"Could not create audio device input: %@", error);
        }

        if ([self.session canAddInput:audioDeviceInput]) {
            [self.session addInput:audioDeviceInput];
        }
        else {
            NSLog(@"Could not add audio device input to the session");
        }

        AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
        if ([self.session canAddOutput:movieFileOutput]) {
            [self.session addOutput:movieFileOutput];
            AVCaptureConnection *connection = [movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
            if (connection.isVideoStabilizationSupported) {
                connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeAuto;
            }
            self.movieFileOutput = movieFileOutput;
        }
        else {
            NSLog(@"Could not add movie file output to the session");
            self.setupResult = AVCamSetupResultSessionConfigurationFailed;
        }

        AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
        if ([self.session canAddOutput:stillImageOutput]) {
            stillImageOutput.outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
            [self.session addOutput:stillImageOutput];
            self.stillImageOutput = stillImageOutput;
        }
        else {
            NSLog(@"Could not add still image output to the session");
            self.setupResult = AVCamSetupResultSessionConfigurationFailed;
        }

        [self.session commitConfiguration];
    });
}
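For context, since the app's goal is stills only, a capture from the stillImageOutput configured above could look roughly like this (a sketch, with error handling kept minimal; `snapStillImage` is a hypothetical method name, and note that AVCaptureStillImageOutput was later deprecated in iOS 10 in favor of AVCapturePhotoOutput):

```objectivec
// Sketch: capture a still image from the configured stillImageOutput.
- (void)snapStillImage
{
    dispatch_async(self.sessionQueue, ^{
        AVCaptureConnection *connection =
            [self.stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
        [self.stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
            completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                if (!imageDataSampleBuffer) {
                    NSLog(@"Could not capture still image: %@", error);
                    return;
                }
                // The output was configured for JPEG above, so extract JPEG data.
                NSData *imageData =
                    [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *image = [UIImage imageWithData:imageData];
                // Use the image here (e.g. crop to a square and save).
            }];
    });
}
```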
CMPPreviewView:
+ (Class)layerClass
{
    return [AVCaptureVideoPreviewLayer class];
}

- (AVCaptureSession *)session
{
    AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.layer;
    return previewLayer.session;
}

- (void)setSession:(AVCaptureSession *)session
{
    AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.layer;
    previewLayer.session = session;
    // The backing layer is an AVCaptureVideoPreviewLayer, not an AVPlayerLayer,
    // so set the gravity on previewLayer directly rather than through an AVPlayerLayer cast.
    previewLayer.videoGravity = AVLayerVideoGravityResize;
}
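For a square preview that isn't stretched, one option (a sketch, assuming CMPPreviewView itself is constrained to a 1:1 aspect ratio in the layout) is AVLayerVideoGravityResizeAspectFill instead of AVLayerVideoGravityResize, so the video fills the square while preserving its aspect ratio and the overflow is clipped:

```objectivec
- (void)setSession:(AVCaptureSession *)session
{
    AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer *)self.layer;
    previewLayer.session = session;
    // Fill the square view, preserving aspect ratio; excess video is cropped visually.
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    // Clip whatever extends beyond the square bounds.
    self.layer.masksToBounds = YES;
}
```

Note this only changes what the preview shows; the captured photo still has the sensor's native aspect ratio.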
Thanks, I'll try it. And I'll also read up on cropping, since I need that too. –

Can you take a look at my edit? Thank you! –

Hi. I think I may have misunderstood the original question. Are you trying to get the video **preview** layer (what you see before taking the photo) to fill that square, or are you trying to crop the **captured** photo to that square? I assumed you meant the video preview layer, which indeed requires adjusting the video gravity. Please also include the code you've tried in your current build, so we can see what you attempted. –
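Since the comments also mention cropping the captured photo, here is a minimal sketch of cropping a UIImage to a centered square (the `SquareCrop` helper is hypothetical, not part of the AVCam sample):

```objectivec
#import <UIKit/UIKit.h>

// Hypothetical helper: crop a UIImage to its centered square.
static UIImage *SquareCrop(UIImage *image)
{
    // Work in pixel coordinates, since CGImage is unaware of the UIImage scale.
    CGFloat pixelWidth  = image.size.width  * image.scale;
    CGFloat pixelHeight = image.size.height * image.scale;
    CGFloat side = MIN(pixelWidth, pixelHeight);
    CGRect cropRect = CGRectMake((pixelWidth  - side) / 2.0,
                                 (pixelHeight - side) / 2.0,
                                 side, side);

    CGImageRef cropped = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(cropped);
    return result;
}
```

One caveat: CGImageCreateWithImageInRect operates on the un-rotated CGImage, so for photos taken in orientations other than up, the crop rect may need to account for imageOrientation.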