Lab 5: AVFoundation
Note: You will need an iPhone or iPad running iOS 11 and a Lightning-to-USB cable for this lab.
In today's lab, you'll get some experience with part of AVFoundation by creating a camera for your Snapchat Clone, while also becoming more familiar with handling user permissions and looking up information in Apple Developer Documentation. Since this lab requires a hardware camera, you'll be testing your application on an iOS device (iPhone/iPad).
This assignment builds off of hw3 pt2. Instead of using your own Firebase console, we've connected the project to a shared Firebase project so that you can send snaps to the entire class. We're using the same database structure as in the homework, but we've already implemented it for you.
```
Root
├── Posts
│   └── *postID*
│       ├── *post*
│       └── ...
└── Users
    └── *userID*
        ├── readPosts
        └── ...
```
Begin by downloading the repo and opening "Snapchat Camera Lab.xcworkspace". Having finished Homework 3, you should already be familiar with the code provided.
For this lab, you will only be editing CameraViewController.swift and its corresponding view controller in Storyboard.
Part 1: Connecting your iOS Device
Before writing any code, you'll need to connect your iPhone or iPad to your computer via a USB cable. Once you've done that, click on the simulator dropdown and select your device's name.
Try building. You'll see an error pop up: "Signing for 'Snapchat Camera Lab' requires a development team. Select a development team in the project editor." To fix this, you'll need to add a signing account, which you can do with an Apple ID. To set your team profile, open your project file in Xcode and click "Add Account" in the Team dropdown (see image).
Once you've set your Team to your Apple ID, try running your app on your device again. If everything's working, you'll see a blank gray view with some buttons (that don't work yet). If you're still having an issue, ask one of the TAs for help (you may need to change your deployment target from 11.1 to 11.0 if your phone is not running the latest iOS version). The first build on your device can take a few minutes to complete, so give Xcode some time.
Part 2: Connecting Outlets
Just so you become more familiar with which views are which, we left the outlets and actions in CameraViewController.swift unconnected to storyboard. Go ahead and connect these outlets and actions in CameraViewController.swift to Main.storyboard. Make sure to read the comments above each IBOutlet and IBAction, so that you are sure you are connecting them correctly.
Tip: You may find it helpful to open the Document Outline. You can drag from outlets and actions in code to UI elements in the Document Outline if you find that easier. If you need to delete any connections you made, tap on your view controller in the Document Outline or in Storyboard, and open the Connections Inspector to see all of your connections and delete any if necessary.
If you connected all of the outlets AND actions correctly, try taking a picture. You should see an image of a squirrel with a neat little leaf hat. Though sending squirrel snaps to the rest of the class is fun, let's add in our own custom camera to send some photo snaps!
Part 3: Getting User Permissions and Capturing from User's Device
To view real-time data collected from your device's camera, we'll be using AVFoundation's AVCaptureSession. Notice the "AVFoundation" import at the top of the file.
We've already defined the following AVFoundation-related instance variables for you, and you should recognize AVCapturePhotoOutput from lecture.
```swift
// middleman between AVCaptureInput and AVCaptureOutput
var captureSession: AVCaptureSession?
// view that will let us preview what is being captured from our input
var previewLayer: AVCaptureVideoPreviewLayer?
// used to capture a single photo from our capture device
var photoOutput: AVCapturePhotoOutput? = nil
```
Part 3.1: Configuring Capture Session - creating a preview layer for displaying video
Notice the two commented-out method calls in viewDidLoad. These are two helper methods we've defined for the lab to help organize our code, but you will need to fill them out. To begin:
- Instantiate an AVCaptureSession in viewDidLoad and assign it to captureSession. We will be using only one session throughout this lab, so we need to keep a reference to it.
- Uncomment the call to the helper function createAndLayoutPreviewLayer(fromSession:).
- Finish implementing the createAndLayoutPreviewLayer(fromSession:) function. You can learn about preview layers here: AVCaptureVideoPreviewLayer. All you will need to do is create an instance of AVCaptureVideoPreviewLayer using our captureSession, and assign it to previewLayer.
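Put together, Part 3.1 might look something like the sketch below. The helper name and instance variables come from the starter code; the layout details (frame, videoGravity, and where the layer is inserted in the view hierarchy) are assumptions, so adapt them to your storyboard setup.

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    // Keep one AVCaptureSession around for the whole lab.
    captureSession = AVCaptureSession()
    if let session = captureSession {
        createAndLayoutPreviewLayer(fromSession: session)
        // configureCaptureSession(forDevicePosition: .front)  // uncommented in Part 3.2
    }
}

func createAndLayoutPreviewLayer(fromSession session: AVCaptureSession) {
    // Create a preview layer backed by our single capture session.
    let layer = AVCaptureVideoPreviewLayer(session: session)
    layer.frame = view.bounds
    layer.videoGravity = .resizeAspectFill  // fill the screen without letterboxing
    previewLayer = layer
    // Insert below the buttons so the UI stays visible on top of the video.
    view.layer.insertSublayer(layer, at: 0)
}
```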
Once you've finished this, you've created a view to display video from your camera. However, you won't see any video showing up yet when you run, since we haven't set our captureSession's input. Let's do that now!
Part 3.2: Configuring Capture Session - setting inputs and outputs
Go back to viewDidLoad and uncomment the call to the helper method configureCaptureSession(forDevicePosition:). Inside the configureCaptureSession(forDevicePosition:) function, you will need to do the following:
- Create an input using a capture device, and add it to captureSession. This will allow us to receive data from the device (camera) while the session is running.
- Create and add an AVCapturePhotoOutput output for our session. Use the photoOutput variable we defined for you, so that we can access this output outside of this method body.

Note: The lecture slides will help you if you're not sure what to do. Feel free to rename someConstantWithABadName to something more descriptive.

Tip: Once we have a hardware capture device, we need to create a capture device input to feed into our capture session.
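The steps above could be sketched as follows. This assumes photoOutput is declared as a var; the specific deviceTypes passed to the discovery session and the startRunning() call at the end are assumptions based on typical AVFoundation setup, not requirements from the starter code.

```swift
func configureCaptureSession(forDevicePosition position: AVCaptureDevice.Position) {
    guard let session = captureSession else { return }

    // Find a built-in camera at the requested position (front or back).
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera],
        mediaType: .video,
        position: position)

    // Wrap the hardware device in an input and feed it into the session.
    guard let camera = discovery.devices.first,
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return }
    session.addInput(input)

    // Attach a photo output so we can capture stills in Part 4.
    let output = AVCapturePhotoOutput()
    if session.canAddOutput(output) {
        session.addOutput(output)
    }
    photoOutput = output  // keep a reference for use outside this method

    session.startRunning()
}
```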
Once you've finished, try running your app. You should be able to see video data from your camera in the previewLayer. Try taking a photo - looks like we still have a problem (you should still see the squirrel at this point). To fix this, we need to capture the photo from our session's input using our session's output, which we'll do in the next step.
Part 4: Taking a photo using AVCapturePhoto
Now, we'll implement taking a photo and then displaying this image in our image view (imageViewOverlay). To do this, we will need to use AVCapturePhotoCaptureDelegate. Specifically, you will need to:
- Capture a photo using your photoOutput object in the takePhotoButtonWasPressed(_:) IBAction method. Remember, this doesn't save the image anywhere; we will handle that using AVCapturePhotoCaptureDelegate in the next step.
- Define the photoOutput(_:didFinishProcessingPhoto:error:) delegate method. (We did not add this in for you; you will need to create it and make sure your view controller class conforms to the delegate protocol.)

Inside of the photoOutput(_:didFinishProcessingPhoto:error:) function, you will need to do the following (in this order):
- Create a UIImage using the photo parameter. Slide 54 from lecture will help you if you're stuck.
- Call toggleUI(isInPreviewMode: true). This is already implemented for you, and it simply updates the UI elements on the screen (i.e., once the picture is taken, we want to present the send button, hide the camera flip button, etc.).
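The two capture steps above could look roughly like this sketch. The sender type on the IBAction and the use of fileDataRepresentation() to build the UIImage are assumptions; check slide 54 for the exact approach shown in lecture.

```swift
@IBAction func takePhotoButtonWasPressed(_ sender: UIButton) {
    // Ask the photo output for a single still; the result arrives via the delegate.
    let settings = AVCapturePhotoSettings()
    photoOutput?.capturePhoto(with: settings, delegate: self)
}

extension CameraViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // 1. Build a UIImage from the captured photo's data.
        guard let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        imageViewOverlay.image = image
        // 2. Switch the UI into preview mode (show send button, hide flip button, etc.).
        toggleUI(isInPreviewMode: true)
    }
}
```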
Once you've finished this step, you should be able to take and send photos!
Part 5 (OPTIONAL BUT ALSO NOT THAT BAD): Supporting front and back camera
Right now, we can only take pictures using the front camera. To add support for toggling between the front camera and the rear-facing camera, go back to the function we implemented at the beginning of the lab, configureCaptureSession(forDevicePosition:). Right now, someConstantWithABadName (which you probably know by now is a reference to our front camera device) uses a discovery session to search for all devices with a built-in camera.
To toggle between the front-facing camera and the rear-facing camera, you'll need to edit this constant, as well as the method flipCamera. Some hints:
- You'll need to pass in a different device position (AVCaptureDevice.Position) to configureCaptureSession(forDevicePosition:) depending on which camera is active.
- You may (and should) reuse the same captureSession when switching devices/cameras (meaning you should call configureCaptureSession(forDevicePosition:) from flipCamera), but you must remove your old camera input before adding a new one. You can do this by getting the first input from captureSession.inputs (if one exists) and calling captureSession.removeInput(_:), passing the input in as an argument.
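A sketch of flipCamera following those hints; note that the currentPosition property is hypothetical (the starter code may track the active camera differently):

```swift
// Hypothetical property tracking which camera is currently active.
var currentPosition: AVCaptureDevice.Position = .front

@IBAction func flipCamera(_ sender: UIButton) {
    // Remove the old camera input before adding the new one.
    if let oldInput = captureSession?.inputs.first {
        captureSession?.removeInput(oldInput)
    }
    // Reuse the same session, just reconfigured for the other position.
    currentPosition = (currentPosition == .front) ? .back : .front
    configureCaptureSession(forDevicePosition: currentPosition)
}
```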
If you implemented this correctly, you should be able to toggle back and forth between both of your device's cameras. Nice work!
Once you've finished the lab, you can get checked off using this form: https://goo.gl/forms/MurNLD7tpac3uZt83. If you weren't able to finish before 8pm, make sure to fill in the keyword question, and be sure to get checked off next week at the beginning of lab.