Photo and Video Evidence Capture in Low-Bandwidth Security Guard Apps
A security guard snaps a photo of a broken fence at a remote pipeline facility. The image is 8MB because the phone captured it at full 48-megapixel resolution. The guard's device has a weak 3G signal with 200kbps effective throughput. At that speed, uploading the photo takes over 5 minutes, assuming the connection does not drop. Multiply this by the 10 photos in the incident report, and the upload will not complete before the guard's shift ends.
This is a common failure mode in guard tour apps deployed at sites with limited connectivity. The solution is not to avoid media capture. Photos and videos are the most valuable evidence a guard produces. The solution is to build the capture, compression, storage, and upload pipeline specifically for low-bandwidth environments.
Image Compression: Format and Quality Selection
The first decision point is image format. Modern phones support three practical options for compressed photos: JPEG, WebP, and HEIC (HEIF).
JPEG
JPEG is universally supported across every device, browser, and backend system. At quality level 80 (on a 0-100 scale), a 12-megapixel photo is typically 1.5 to 3MB. Dropping to quality 60 reduces this to 800KB to 1.5MB with noticeable but acceptable quality loss for evidence purposes. The critical detail in a security photo is usually a license plate, a face, or damage to property. At quality 60, these details remain legible in most lighting conditions.
WebP
WebP achieves 25-35% smaller file sizes than JPEG at equivalent visual quality. A 12-megapixel photo at WebP quality 75 is roughly equivalent in visual quality to JPEG quality 80, but 30% smaller. Android has native WebP encoding through Bitmap.compress(CompressFormat.WEBP_LOSSY, quality, outputStream) on Android 11+; older versions use the now-deprecated CompressFormat.WEBP. iOS added WebP support in iOS 14 through the ImageIO framework. The downside is that some older backend systems and web browsers do not render WebP, so you may need to store the original format as a fallback.
HEIC
HEIC is the default capture format on iPhones since iOS 11 and is supported on Android 10+ with hardware encoding on many chipsets. HEIC achieves roughly 50% file size reduction compared to JPEG at equivalent quality. A 12-megapixel HEIC photo at default quality is typically 1 to 2MB. The problem is backend compatibility. Not all server-side image processing libraries support HEIC natively. If your backend runs ImageMagick 7+ or uses libheif, HEIC works. Otherwise, you need a transcoding step on upload or on the server.
Recommended Strategy
For maximum compatibility with minimum file size, capture photos at the camera's native format (HEIC on iOS, JPEG or HEIC on Android depending on device) and store the original locally. Before upload, transcode to WebP at quality 70 if the device supports it, or JPEG at quality 65 as a fallback. This gives you files in the 500KB to 1.2MB range for a 12-megapixel photo, which is manageable even on slow connections.
Additionally, downsample the resolution before compression. A 48-megapixel capture contains far more detail than needed for incident evidence. Resizing to 3000x2000 pixels (6 megapixels) before compression preserves the forensically relevant detail while cutting the pre-compression pixel count by 87.5%. On Android, use Bitmap.createScaledBitmap(). On iOS, draw the CGImage into a CGContext at the target dimensions.
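The resize step reduces to computing target dimensions that cap the pixel count while preserving aspect ratio. A minimal sketch (the function name and the 6-megapixel default are illustrative, matching the target above):

```python
import math

def downsample_dims(width: int, height: int, max_megapixels: float = 6.0) -> tuple[int, int]:
    """Compute target dimensions that cap the pixel count at max_megapixels
    while preserving the aspect ratio. Returns the original dimensions if
    the image is already small enough."""
    pixels = width * height
    cap = max_megapixels * 1_000_000
    if pixels <= cap:
        return width, height
    scale = math.sqrt(cap / pixels)  # linear scale factor for both axes
    return round(width * scale), round(height * scale)

# A 48 MP capture (8000x6000) scaled to the 6 MP evidence target:
w, h = downsample_dims(8000, 6000)  # -> (2828, 2121), about 6 MP
```

Pass the resulting dimensions to Bitmap.createScaledBitmap() or the CGContext size on the respective platform.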
Thumbnail-First Sync
When the operations center reviews an incident report, they need to see what happened quickly. They do not need the full-resolution photo immediately. A thumbnail-first sync strategy uploads a small preview image before the full file, giving supervisors immediate visibility into incidents even while the full media is still uploading.
At capture time, generate a 320x240 pixel JPEG thumbnail at quality 50. This thumbnail is typically 15 to 30KB, small enough to upload within a few seconds even on a 2G connection. Store the thumbnail alongside the full image on disk, linked by the same media UUID.
The sync worker uploads media in two phases:
- Upload the incident report text, metadata, and all thumbnails. This payload is typically under 100KB total, even for a report with 10 photos.
- Upload full-resolution images one at a time, in the background, with resume support.
The operations dashboard displays thumbnails immediately and progressively replaces them with full images as they arrive. A subtle loading indicator on each thumbnail tells the reviewer whether the full image is available yet.
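The two-phase ordering can be expressed as a simple sort over the pending queue. A sketch, assuming a per-item record with a kind and size (field names are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    media_uuid: str
    kind: str          # "thumbnail", "image", or "video"
    size_bytes: int

# Phase ordering: thumbnails first (phase 1 payload), then full-resolution
# images, then videos. Within a phase, smallest files first so something
# completes quickly on a flaky link.
PHASE = {"thumbnail": 0, "image": 1, "video": 2}

def upload_order(queue: list[MediaItem]) -> list[MediaItem]:
    return sorted(queue, key=lambda m: (PHASE[m.kind], m.size_bytes))
```

The sync worker drains this ordered list, so a reviewer sees every thumbnail before the first full image finishes.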
Video Chunked Upload with Resume
Video is the most bandwidth-intensive media type. A 30-second video at 720p with H.264 encoding is approximately 15 to 25MB. On a connection that drops every few minutes, uploading this as a single HTTP request will fail repeatedly.
Chunked upload with resume support solves this. The approach follows the tus (tus.io) resumable upload protocol or a similar custom implementation:
- The client creates an upload session with the server, sending the total file size, content type, and associated metadata (incident report ID, media UUID, SHA-256 hash of the complete file).
- The server returns a session URL and the chunk size to use (typically 256KB to 1MB, depending on expected bandwidth).
- The client reads the file in chunks and uploads each chunk with a Content-Range header indicating its byte offset.
- If a chunk upload fails (timeout, connection drop), the client queries the server for the last successfully received byte offset and resumes from that point.
- After the final chunk, the server reassembles the file and verifies the SHA-256 hash against the value provided at session creation.
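The steps above can be sketched as a client resume loop against a toy in-memory session standing in for the server (class and method names are illustrative, mirroring the tus offset-query pattern):

```python
import hashlib

class UploadSession:
    """Toy server side: accepts sequential chunks, tracks the received offset."""
    def __init__(self, total_size: int, expected_sha256: str):
        self.total_size = total_size
        self.expected_sha256 = expected_sha256
        self.received = bytearray()

    def offset(self) -> int:                 # "how much do you have?" query
        return len(self.received)

    def put_chunk(self, offset: int, chunk: bytes) -> bool:
        if offset != len(self.received):     # reject out-of-order chunks
            return False
        self.received.extend(chunk)
        return True

    def verify(self) -> bool:                # hash check after the final chunk
        return hashlib.sha256(self.received).hexdigest() == self.expected_sha256

def resumable_upload(data: bytes, session: UploadSession,
                     chunk_size: int = 256 * 1024) -> bool:
    # On (re)start, ask the server how much it already has, then continue.
    offset = session.offset()
    while offset < len(data):
        chunk = data[offset:offset + chunk_size]
        if not session.put_chunk(offset, chunk):
            offset = session.offset()        # connection hiccup: re-sync offset
            continue
        offset += len(chunk)
    return session.verify()
```

Because the loop always re-queries the offset after a failure, restarting the worker mid-file costs at most one chunk of re-transmission.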
On Android, implement the chunked upload worker using WorkManager with a NetworkType.CONNECTED constraint. WorkManager handles retries, backoff, and respects Doze mode. On iOS, use URLSession with a background configuration (URLSessionConfiguration.background(withIdentifier:)). Background URL sessions continue uploading even when the app is suspended, which is critical for large video files.
Video Compression Before Upload
Before uploading, compress the video to a bandwidth-appropriate quality. Guards do not need to capture at 4K. Configure the camera session to record at 720p (1280x720) with H.264 encoding at a bitrate of 2 Mbps. This produces roughly 15MB per minute of footage, a reasonable balance between visual quality and file size for evidence purposes.
On Android, use MediaCodec or the MediaRecorder API with explicit bitrate settings. On iOS, configure AVCaptureSession with AVCaptureSessionPreset1280x720 and set the video data output's compression properties. If the guard captures at a higher resolution (because the camera defaults to 1080p or 4K), transcode the file before upload using AVAssetExportSession on iOS or MediaCodec on Android.
Background Upload with WorkManager and BGTaskScheduler
Media uploads must happen in the background without the guard keeping the app open. The guard captures evidence, submits the report, and returns to patrol. The upload pipeline runs independently.
Android WorkManager
Create a CoroutineWorker (or Worker for Java) that processes the media upload queue. Set constraints to require network connectivity. For large files, use a long-running worker with setForeground() to show a persistent notification during upload, which prevents the system from killing the worker. Chain workers so that thumbnail uploads run first, followed by full-resolution image uploads, followed by video uploads.
WorkManager persists the work queue across app restarts and device reboots. If the device reboots mid-upload, WorkManager re-enqueues the work and the chunked upload resumes from the last successful byte offset.
iOS BGTaskScheduler and Background URLSession
On iOS, background URLSession is the primary mechanism for large uploads. The system manages the upload even when the app is not running. Register a BGProcessingTask for batch processing of the upload queue (selecting which files to upload next, generating thumbnails for new captures). The actual file transfer uses the background URL session, which the OS handles natively.
One iOS-specific constraint: background URL sessions require the upload body to be a file on disk, not an in-memory data object. Structure your upload to write the request body (including multipart boundaries if applicable) to a temporary file, then pass that file URL to the upload task.
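The body-on-disk pattern is platform-agnostic even though iOS forces it. A sketch in Python of assembling a multipart body into a temporary file so the OS uploader can stream it (the boundary string and field names are illustrative):

```python
import os
import tempfile

def write_multipart_body(media_path: str, metadata_json: bytes,
                         boundary: str = "evidence-boundary") -> str:
    """Assemble a multipart/form-data request body on disk so the upload
    task can stream it from a file instead of holding it in memory."""
    fd, body_path = tempfile.mkstemp(suffix=".body")
    with os.fdopen(fd, "wb") as body:
        body.write(f"--{boundary}\r\n".encode())
        body.write(b'Content-Disposition: form-data; name="metadata"\r\n')
        body.write(b"Content-Type: application/json\r\n\r\n")
        body.write(metadata_json + b"\r\n")
        body.write(f"--{boundary}\r\n".encode())
        body.write(b'Content-Disposition: form-data; name="media"; filename="evidence"\r\n')
        body.write(b"Content-Type: application/octet-stream\r\n\r\n")
        with open(media_path, "rb") as media:    # stream in blocks, never slurp
            for block in iter(lambda: media.read(64 * 1024), b""):
                body.write(block)
        body.write(f"\r\n--{boundary}--\r\n".encode())
    return body_path
```

On iOS the returned path is what you hand to uploadTask(with:fromFile:); remember to delete the temporary file in the session's completion handler.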
On-Device Storage Management
A guard who captures 50 photos and 5 videos during a shift generates 200 to 500MB of media. Over a week of shifts without Wi-Fi sync, a 64GB phone can accumulate several gigabytes of pending media. The app must manage local storage proactively.
- After a media file is successfully uploaded and the server confirms receipt (with a matching hash), delete the local file. Keep the thumbnail and metadata for offline viewing.
- Monitor available disk space using StatFs on Android or FileManager.attributesOfFileSystem(forPath:) on iOS. If available space drops below 500MB, warn the guard and prioritize syncing media over capturing new files.
- Implement a retention policy: media files pending upload for more than 14 days without a successful sync are flagged for supervisor review. Never auto-delete unsynced evidence.
- Store media in the app's internal storage directory, not external storage. This prevents other apps from accessing evidence files and ensures they are cleaned up on app uninstall.
Metadata Preservation and Chain of Evidence
Every media file must carry metadata that establishes when, where, and by whom it was captured. This metadata must survive compression, upload, and server-side processing without alteration.
Embedded Metadata
For photos, write GPS coordinates, timestamps (both device clock and GPS-derived), device ID, and guard ID into the EXIF data before saving. On Android, use ExifInterface to write these tags. On iOS, use the CGImageDestination API with a properties dictionary. Write metadata immediately after capture, before any compression or resizing, so the original file has complete metadata. After resizing and compressing for upload, verify that the metadata carried through. Some compression libraries strip EXIF data by default; configure them to preserve it.
Sidecar Metadata for Video
Video files do not have a universal metadata standard equivalent to EXIF. Store video metadata in a JSON sidecar file that travels with the video through the upload pipeline. The sidecar includes: GPS coordinates at capture start, GPS coordinates at capture end (if the guard moved during recording), device clock timestamps, GPS timestamps, device ID, guard ID, video duration, resolution, codec, and the SHA-256 hash of the raw video file.
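A sidecar with those fields is straightforward to build at capture time. A sketch using a chunked hash read so large videos are never loaded whole into memory (the field names are one plausible schema, not a standard):

```python
import hashlib
import json

def build_sidecar(video_path: str, *, gps_start, gps_end, device_id, guard_id,
                  device_ts, gps_ts, duration_s, resolution, codec) -> str:
    """Build the JSON sidecar for a video file, including the SHA-256 of
    the raw video bytes. Hash is computed in 1 MiB blocks."""
    h = hashlib.sha256()
    with open(video_path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return json.dumps({
        "gps_start": gps_start, "gps_end": gps_end,
        "device_timestamp": device_ts, "gps_timestamp": gps_ts,
        "device_id": device_id, "guard_id": guard_id,
        "duration_seconds": duration_s, "resolution": resolution,
        "codec": codec, "sha256": h.hexdigest(),
    }, indent=2)
```

Write the sidecar next to the video under the same media UUID, and upload it in the phase-one metadata payload so the server can verify the video when it arrives.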
Hash-Based Integrity Verification
Compute a SHA-256 hash of each media file immediately after capture, before any processing. Store this hash in the incident report record and in the sidecar metadata. When the file is uploaded, the server independently computes the hash and compares it. A mismatch indicates the file was corrupted in transit or altered on device. For legal and insurance purposes, this hash chain proves that the file on the server is identical to the file captured by the guard's device.
After compression for upload, compute a second hash of the compressed file. The server receives both: the original capture hash (for the archival record) and the compressed file hash (for transfer verification). If the server needs the original uncompressed file for forensic analysis, it can request it separately.
Adaptive Quality Based on Connection
Rather than using a fixed compression level, adapt image quality to the current network conditions. Measure effective throughput from the last few successful uploads. If throughput is below 100kbps, compress more aggressively (JPEG quality 50, 2-megapixel resolution). If throughput is above 1 Mbps, use higher quality (WebP quality 75, 6-megapixel resolution). The guard should never have to think about this. The app makes the tradeoff automatically and logs the quality level used so the operations center knows what to expect.
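The mapping from measured throughput to capture settings can be a small lookup. A sketch using the two tiers named above, plus an assumed middle tier between them (the thresholds at the extremes come from the text; the middle values are an interpolation):

```python
def measured_throughput_kbps(recent_uploads: list[tuple[int, float]]) -> float:
    """Effective throughput from (bytes_sent, seconds_taken) pairs for the
    last few successful uploads."""
    total_bytes = sum(b for b, _ in recent_uploads)
    total_secs = sum(s for _, s in recent_uploads)
    return (total_bytes * 8 / 1000) / total_secs

def pick_quality(throughput_kbps: float) -> dict:
    """Map throughput to capture settings; log the chosen tier so the
    operations center knows what quality to expect."""
    if throughput_kbps < 100:
        return {"format": "jpeg", "quality": 50, "megapixels": 2}
    if throughput_kbps < 1000:
        # assumed middle tier, interpolated between the two named in the text
        return {"format": "webp", "quality": 65, "megapixels": 4}
    return {"format": "webp", "quality": 75, "megapixels": 6}
```

The guard's capture flow calls pick_quality() before each compression pass; nothing in the UI changes.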
DEVSFLOW Guarding builds media capture and upload pipelines optimized for the low-bandwidth, high-stakes environments security guards work in. Talk to us about your evidence capture workflow.