Let’s start off by saying that search is not metadata. Most companies approach the video search problem as: extract_metadata -> elasticsearch -> search
At Vidrovr we believe this is fundamentally the wrong way to think about the problem, though this walkthrough will not dive into the details. That said, we do provide our users with metadata. In fact, we provide loads of it, and we don’t sell it piecemeal: you get everything we have.
To retrieve information about your data you will need your API-KEY for most methods.
First things first: we need to find out which videos have been processed. To do this, simply call get_video_list. The only required parameter is the API-KEY.
An example request is:
GET /public/api/v01/get_video_list?api_key=<API-KEY> HTTP/1.1
Content-Type: multipart/form-data; charset=utf-8; boundary=__X_PAW_BOUNDARY__
Host: platform.vidrovr.com
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest
And an example response would look like this:
[
{
"failed": false,
"name": "test_webhook_video",
"id_asset": "dcbce7c1dcdd8cda3ef5ff3f726dba23a2600596-6a2e-46c6-9277-317fed72503c",
"creation_date": 1546980743013
}
]
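The call above can be issued from Python using only the standard library. This is a minimal sketch, assuming HTTPS access to the platform.vidrovr.com host shown in the example; the helper names are hypothetical, but the endpoint and its api_key parameter come from the docs above.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://platform.vidrovr.com/public/api/v01"

def build_url(endpoint, **params):
    """Compose an endpoint URL with URL-encoded query parameters."""
    return f"{API_BASE}/{endpoint}?" + urlencode(params)

def get_video_list(api_key):
    """Return the list of processed videos for this account."""
    with urlopen(build_url("get_video_list", api_key=api_key)) as resp:
        return json.load(resp)

# Example usage (requires a valid key):
# for video in get_video_list("<API-KEY>"):
#     print(video["id_asset"], video["name"])
```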
The previous method gives you a list of all the videos that we have processed, but you can also filter that list by the presence or absence of a certain type of metadata. For example, you can get a list of videos that contain OCR data. The first thing you will need is the API-KEY. You will also need to pass a string parameter has_metadata with a value of either true or false, to filter based on the presence or absence of metadata respectively. The last parameter needed is filters, a comma-separated string specifying the types of metadata to use for filtering. Possible options are 'ocr', 'person', and 'tag'.
An example request is:
GET /public/api/v01/filter_videos_by?api_key=<API-KEY>&has_metadata=True&filters=ocr,tag HTTP/1.1
Content-Type: multipart/form-data; charset=utf-8; boundary=__X_PAW_BOUNDARY__
Host: frontend-staging.vidrovr.com
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest
And an example response would look like this:
{
"ocr": [
{
"service_name": "vidrovr_uploader_service",
"service_relative_file_location": "dan4/6abe00dbf5d50ec534a0eba7496a12d5879bb75c-b56c-4cba-94c4-530814c52264.mp4",
"finished_all_processing": false,
"id": 127298,
"imported": -1,
"file_location": "/home/ubuntu/uploaded_content/dan4/6abe00dbf5d50ec534a0eba7496a12d5879bb75c-b56c-4cba-94c4-530814c52264.mp4",
"id_asset": "6abe00dbf5d50ec534a0eba7496a12d5879bb75c-b56c-4cba-94c4-530814c52264",
"cc_extraction": -1,
"failed": false,
"shot_detection": -1,
"output_message": "queued",
"audio_diarization": -1,
"ocr": -1,
"thumbnail": "thumb.png",
"service_mount_pnt": "/home/ubuntu/uploaded_content/",
"shot_feature_extraction": -1,
"ner_extraction": -1,
"face_feature_extraction": -1,
"owner_user_id": "458461eb5ed8840c8df9b38621d941471dbb51c5-9049-4284-990f-ee12e7e0c43c",
"face_detection": -1,
"creation_date": 1548273035365,
"cc_asr_alignment": -1,
"name": "test_webhook_video",
"in_db": true,
"transcription": -1,
"speaker_identification": -1
}
]
}
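The same filtering call can be sketched in Python. The helper name is hypothetical; the endpoint and its has_metadata and filters parameters are as documented above, and urlencode takes care of escaping the comma-separated filters value.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://platform.vidrovr.com/public/api/v01"

def filter_videos_by(api_key, has_metadata=True, filters=("ocr",)):
    """Filter processed videos by presence/absence of metadata types.

    filters may combine "ocr", "person", and "tag"; the endpoint
    expects them joined into a single comma-separated string."""
    query = urlencode({
        "api_key": api_key,
        "has_metadata": "true" if has_metadata else "false",
        "filters": ",".join(filters),
    })
    with urlopen(f"{API_BASE}/filter_videos_by?{query}") as resp:
        return json.load(resp)  # dict keyed by metadata type, e.g. {"ocr": [...]}

# Example: videos that contain both OCR text and visual tags
# matches = filter_videos_by("<API-KEY>", has_metadata=True, filters=("ocr", "tag"))
```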
Once you have an asset id (id_asset) from the get_video_list request, get_metadata retrieves the metadata for a processed object. You will need the id parameter that is returned by either the RESTful uploader (upload/uploader) or the webhooks uploader (webhooks/upload_request).
Note: id == id_asset in get_video_list. This may change in the future, but for now this is how it is.
An example request is below:
GET /public/api/v01/get_metadata?api_key=<API-KEY>&id=<ID> HTTP/1.1
Host: platform.vidrovr.com
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest
And here is a truncated example response:
{
"other_metadata": [],
"audio_words": [
{
"start": 300.0,
"end": 300.0,
"word": "I"
}
],
"name": "nat_geo_sharks_zone1.mp4",
"tags": [
{
"start": 108.0,
"end": 263.0,
"tags": "Bird Strike"
}
],
"hashtags": [],
"scenes": [
{
"start": 0.0,
"scene_tags": "Fishing Sports",
"end": 108.0
}
],
"id": "8366a4497658ab14fd9234874ccbb0a4369ad47e-ff2e-493b-8b3f-60efffb116a3",
"on_screen_text": [
{
"ocr_string": "H",
"end": 2122.0,
"h": 11,
"start": 1753.0,
"w": 13,
"y": 320,
"x": 358
}
],
"person_identification": [],
"creation_date": 1500229993885,
"thumbnail": "http://dev.vidrovr.com/public/api/v01/get_video_thumbnail/2d68d9e17625bc233c1db9f8d5b427a0/thumbnail/asset/8366a4497658ab14fd9234874ccbb0a4369ad47e-ff2e-493b-8b3f-60efffb116a3/0/0"
}
Note: Other metadata will contain information from your custom detectors.
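The lookup can be sketched in Python like so (helper name hypothetical; the endpoint and its api_key and id parameters are as shown above):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://platform.vidrovr.com/public/api/v01"

def get_metadata(api_key, asset_id):
    """Retrieve all extracted metadata for one processed asset.

    asset_id is the id_asset value from get_video_list (per the note
    above, id == id_asset for now)."""
    query = urlencode({"api_key": api_key, "id": asset_id})
    with urlopen(f"{API_BASE}/get_metadata?{query}") as resp:
        return json.load(resp)

# Example: print each OCR detection with its bounding box
# meta = get_metadata("<API-KEY>", "<ID>")
# for ocr in meta["on_screen_text"]:
#     print(ocr["ocr_string"], (ocr["x"], ocr["y"], ocr["w"], ocr["h"]))
```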
Vidrovr has spent many years working on video search and retrieval technologies. The system leverages various types of inference models and probabilistic graphs to intelligently expand a search, mapping what users input onto Vidrovr’s internal knowledge graph.
That being said, the search method abstracts all of this logic away and returns results plus a neighborhood of our knowledge graph. The only required parameters are the API-KEY and the query.
Here is an example query:
GET /public/api/v01/search?api_key=<API-KEY>&query=west%20coast HTTP/1.1
Host: platform.vidrovr.com
Connection: close
User-Agent: Paw/3.1.8 (Macintosh; OS X/10.14.2) GCDHTTPRequest
with a response that looks like:
{
"ranking": {
"persons_similarity": [
{
"person": "Tony_West",
"word_match_score": 0.2509476309226933,
"similar_people": {
"video_count": null,
"clip_count": null,
"wiki_url": "https://en.wikipedia.org/wiki/Tony_West",
"data_type": "CO_OCCURENCE",
"similar_people": [
{
"score": 0.0,
"name": "Claire Foy",
"rank": 1
}
],
"mysql_id": 3386,
"thumbnail": null
}
}
],
"tag_similarity": [
{
"embedding_word": "town",
"query_word": "west",
"similar_tags": {
"video_count": null,
"clip_count": null,
"embedding_rep": null,
"data_type": "SEMANTIC_GLOVE",
"similar_tags": [
{
"score": 0.8609558763002203,
"name": "town",
"rank": 1
}
],
"tag_category": null
},
"cosine_distance": 0.0006745313173820669
},
{
"embedding_word": "coast",
"query_word": "coast",
"similar_tags": {
"video_count": null,
"clip_count": null,
"embedding_rep": null,
"data_type": "SEMANTIC_GLOVE",
"similar_tags": [
{
"score": 0.6666745620989242,
"name": "coast",
"rank": 1
}
],
"tag_category": null
},
"cosine_distance": 0.0008288436350588459
}
]
},
"results": [
{
"key_tags": [],
"name": "nat_geo_sharks_zone1.mp4",
"tags": [
{
"frame_end": 263.0,
"frame_start": 108.0,
"tag": "Bird Strike"
}
],
"hashtags": [],
"scenes": [
{
"frame_end": 108.0,
"frame_start": 0.0,
"scene": "Fishing Sports"
}
],
"creation_date": 1500229993885,
"on_screen_text": [
{
"frame_end": 2122.0,
"text": "H",
"w": 13,
"x": 358,
"y": 320,
"frame_start": 1753.0,
"h": 11
}
],
"score": 1.747759,
"person_identification": [],
"key_people": [],
"audio_transcript": [
{
"transcript": " cruise business west coast no service the kids call andy SU stop the sharks close to the bus andy is using the 6 camera virtual reality the capture of 360 degree view scientist to great white males reach sexual maturity with a broken toe by 11 and a half by 13 feet links great white teenagers now find the great white mothers if he finds large females ground open check up on those big sharks on the bottom once the camera settles into position indian video technician matt hutchings start to see some of the larger great white hello "
}
],
"key_hashtags": [],
"id": "8366a4497658ab14fd9234874ccbb0a4369ad47e-ff2e-493b-8b3f-60efffb116a3"
}
]
}
The main query results are returned as part of the results array, whereas the knowledge graph (which we use to rank results) is returned in ranking. As part of the knowledge graph, we return similar people and visual-tag similarities to the query.
If you are still having difficulties, feel free to shoot us a message at support@vidrovr.com.
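To close the loop, here is a sketch of the search call in Python (hypothetical helper; only the api_key and query parameters from the docs are required, and urlencode handles escaping spaces in the query string):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://platform.vidrovr.com/public/api/v01"

def search(api_key, query):
    """Run a free-text search and split the response into its two parts."""
    qs = urlencode({"api_key": api_key, "query": query})
    with urlopen(f"{API_BASE}/search?{qs}") as resp:
        body = json.load(resp)
    return body["results"], body["ranking"]

# Example:
# results, ranking = search("<API-KEY>", "west coast")
# for hit in results:
#     print(hit["score"], hit["name"])
```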