vSphere Content Library supports creating a "3rd Party Content Library" backed by an HTTP(S) endpoint. Historically, this required customers to store the content locally on a system that could run the Python script, which indexes the Content Library and generates its metadata. You would then upload the content (which can range from several GBs to TBs), along with the metadata, to Amazon S3 for publishing. Whenever content was added or removed, the metadata had to be re-generated, and since this was only supported locally, you needed to re-upload the changes and potentially provision additional storage space, which was not ideal. This new version of the script allows customers to upload content directly to an S3 endpoint and then remotely index and generate the Content Library metadata.
The script supports both the "old" local method and the new S3 method:
# Index local 3rd Party Content Library:
python make-vcsp-2018.py -n foo-local -t local -p mylibrary
# Index remote S3 Content Library:
python make-vcsp-2018.py -n foo-remote -t s3 -p mylibrary
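To illustrate what "generating the Content Library metadata" means, here is a minimal sketch of building the top-level library descriptor (a lib.json-style document) that a subscribed library fetches first. The exact field names and values are assumptions based on the general shape of the published format, not the script's verbatim output, so verify against what make-vcsp-2018.py actually writes:

```python
import datetime
import json
import uuid


def make_lib_json(name):
    """Build a minimal library descriptor (field names are assumed, not
    guaranteed to match the script's exact output)."""
    return {
        "vcspVersion": "2",
        "version": "1",
        "id": "urn:uuid:" + str(uuid.uuid4()),
        "name": name,
        "created": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # Points at the item index, which in turn lists each item's files.
        "itemsHref": "items.json",
    }


# The local and S3 modes differ only in where content is read from and
# where this JSON is written; the descriptor itself looks the same.
print(json.dumps(make_lib_json("foo-remote"), indent=2))
```

When vCenter subscribes to the library, it fetches this descriptor from the HTTP(S) endpoint, follows itemsHref to enumerate items, and compares versions to decide what to sync.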