In this paper, we explore minimal-case solutions for the rotational alignment of IMU-camera systems using homography constraints. The underlying assumption, that a ground plane is visible in the images, is easily satisfied in many situations. This calibration is relevant to many smart devices equipped with a camera and an inertial measurement unit (IMU), such as micro aerial vehicles (MAVs), smartphones, and tablets, and it is a fundamental step for fusing vision and IMU data. Our solutions are novel in that they compute the rotational alignment by applying a first-order rotation approximation and solving a polynomial equation system derived from homography constraints. The solutions depend on the calibration case, characterized by the camera motion (general motion or pure rotation) and the camera parameters (fully calibrated or partially uncalibrated). We show that the minimal number of point correspondences required in an image pair ranges from 1.5 to 3, depending on the case. This enables calibration from a single relative movement and yields an exact algebraic solution to the problem. These minimal-case solutions reduce computation time and increase calibration robustness when running Random Sample Consensus (RANSAC) on the point correspondences between two images. Furthermore, a non-linear parameter optimization is performed over all image pairs. In contrast to previous calibration methods, our solutions require no special hardware and work with a single image pair without special motion. Finally, by evaluating our algorithms on both synthetic and real scene data, including data obtained from robots, smartphones, and MAVs, we demonstrate that our methods are efficient and numerically stable for the rotational alignment of IMU-camera systems.
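As a rough illustration of the two building blocks named above, the sketch below shows the first-order rotation approximation and the standard plane-induced homography in NumPy. The function names and the specific numbers are our own illustrative choices, not the paper's implementation; the point is only that for small angles the linearized rotation makes the homography constraint low-degree in the unknown parameters.

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix [v]_x, so that skew(v) @ u == np.cross(v, u).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot_exact(theta):
    # Exact rotation from an axis-angle vector theta via Rodrigues' formula.
    a = np.linalg.norm(theta)
    if a < 1e-12:
        return np.eye(3)
    K = skew(theta / a)
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def rot_first_order(theta):
    # First-order approximation R ~ I + [theta]_x, valid for small angles;
    # this is what keeps the resulting constraint system polynomial and
    # low-degree in the unknown alignment parameters.
    return np.eye(3) + skew(theta)

def plane_homography(R, t, n, d):
    # Euclidean homography induced by the plane {x : n^T x = d} between two
    # calibrated views related by rotation R and translation t:
    #   H = R + (1/d) * t * n^T
    return R + np.outer(t, n) / d
```

For a small rotation, `rot_first_order` stays close to `rot_exact`, and `plane_homography` maps normalized image points on the plane from the first view to the second, which is the constraint exploited per point correspondence.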