Forefront vs. open-source

At Forefront, we strongly support open-source projects and the utility they provide to developers around the world. In the case of ML deployment, there are many improvements that can be brought to current open-source tools, such as automated containerization, one-click version control, and managed resource allocation. This is our mission at Forefront.

Forefront

Model versioning
CI/CD
Endpoint deployment
Endpoint monitoring

How to go from a trained model to inferencing
1. Create account
2. Upload trained model file ***
3. Click deploy ***
4. Copy API endpoint
5. Start inferencing (sketched below)

*** Repeat every deployment
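
Step 5 is a plain HTTP call against the endpoint you copied in step 4. Here is a minimal sketch in Python; the endpoint URL and the payload shape are placeholders for illustration, not Forefront's actual API:

    import requests

    # Hypothetical endpoint copied from the dashboard in step 4.
    ENDPOINT = "https://example-model.forefront.example/predict"

    # The payload shape depends on your model; a two-feature
    # numeric input is assumed here purely for illustration.
    payload = {"inputs": [[5.1, 3.5]]}

    response = requests.post(ENDPOINT, json=payload, timeout=30)
    response.raise_for_status()
    print(response.json())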

Reference docs

Cortex

Model versioning
CI/CD
Endpoint deployment
Endpoint monitoring

How to go from a trained model to inferencing
1. Define an API (sketched below)
2. Create a Dockerfile ***
3. Build an image ***
4. Log in to ECR
5. Create a repository
6. Tag the image ***
7. Push the image ***
8. Configure a YAML file for Cortex deployment
9. Create a deployment ***
10. Get API endpoint
11. Start inferencing

*** Repeat every deployment
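
For step 1, Cortex expects a small Python predictor class. Below is a minimal sketch along the lines of Cortex's PythonPredictor interface; the pickled scikit-learn model and its path inside the image are assumptions for illustration, so check the Cortex docs for the exact interface your version expects:

    import pickle

    class PythonPredictor:
        def __init__(self, config):
            # config comes from the deployment YAML in step 8; a pickled
            # scikit-learn model baked into the image is assumed here.
            with open("/app/model.pkl", "rb") as f:
                self.model = pickle.load(f)

        def predict(self, payload):
            # payload is the parsed JSON request body; the "inputs" key
            # is an assumed convention, not a Cortex requirement.
            return self.model.predict(payload["inputs"]).tolist()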

Reference docs

KubeFlow

Model versioning
CI/CD
Endpoint deployment
Endpoint monitoring

How to go from a trained model to inferencing
1. Set up a Kubernetes cluster
2. Install KubeFlow and all of its dependencies
3. Upload your model to S3 (sketched below) ***
4. Create a YAML file for the uploaded model ***
5. Deploy the Kubernetes service with kubectl ***
6. (optional) Create a custom service that restricts external traffic so your model isn't accessible to anyone on the internet
7. Check status with kubectl and copy the link once live ***
8. Start inferencing

*** Repeat every deployment
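
Step 3 is easy to script with boto3. A minimal sketch; the bucket and key names are placeholders, and it assumes your AWS credentials are already configured:

    import boto3

    # Placeholder names; substitute your own bucket and model artifact.
    BUCKET = "my-model-bucket"
    KEY = "models/my-model/model.joblib"

    s3 = boto3.client("s3")
    s3.upload_file("model.joblib", BUCKET, KEY)
    print(f"Uploaded to s3://{BUCKET}/{KEY}")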

Reference docs

BentoML

Model versioning
CI/CD
Endpoint deployment
Endpoint monitoring

How to go from a trained model to inferencing
1. Install BentoML and Docker
2. Write your API logic in Python (sketched below) ***
3. Save prediction service ***
4. Containerize service ***
5. Deploy on KubeFlow (check KubeFlow steps) ***
6. Start inferencing

*** Repeat every deployment
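
Steps 2 and 3 look roughly like the sketch below with BentoML's 0.x-era API: subclass BentoService, pack the trained model, and save the prediction service. The toy scikit-learn classifier is included only to make the sketch self-contained; newer BentoML versions use a different API, so check the docs for your installed version:

    from sklearn import datasets, svm

    from bentoml import BentoService, api, artifacts, env
    from bentoml.adapters import DataframeInput
    from bentoml.frameworks.sklearn import SklearnModelArtifact

    @env(infer_pip_packages=True)
    @artifacts([SklearnModelArtifact("model")])
    class IrisClassifier(BentoService):
        @api(input=DataframeInput(), batch=True)
        def predict(self, df):
            # Delegate to the packed scikit-learn model.
            return self.artifacts.model.predict(df)

    # Train a toy model so the sketch is self-contained (step 2),
    # then pack and save the prediction service (step 3).
    iris = datasets.load_iris()
    clf = svm.SVC(gamma="scale")
    clf.fit(iris.data, iris.target)

    service = IrisClassifier()
    service.pack("model", clf)
    saved_path = service.save()
    print(saved_path)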

Reference docs

MLFlow

Model versioning
CI/CD
Endpoint deployment
Endpoint monitoring

How to go from a trained model to inferencing
1. Install MLFlow and Docker
2. Grant MLFlow the necessary AWS permissions and roles
3. Write your API logic in Python ***
4. Build and push the MLFlow container to ECR ***
5. Deploy the model to SageMaker (sketched below) ***
6. (optional) Add an authentication service to prevent anyone on the internet from using your model
7. Get SageMaker endpoint ***
8. Start inferencing

*** Repeat every deployment
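
Step 5 can be driven from Python through MLFlow's SageMaker deployment client. This is a sketch only: the deployment name, model URI, region, role ARN, and image URL are all placeholders, and it assumes the IAM role from step 2 and the ECR image from step 4 already exist:

    from mlflow.deployments import get_deploy_client

    client = get_deploy_client("sagemaker:/us-east-1")
    client.create_deployment(
        name="my-model",
        model_uri="models:/my-model/1",
        config={
            # IAM role from step 2 and the ECR image pushed in step 4.
            "execution_role_arn": "arn:aws:iam::123456789012:role/mlflow-sagemaker",
            "image_url": "123456789012.dkr.ecr.us-east-1.amazonaws.com/mlflow-pyfunc:latest",
        },
    )

The resulting SageMaker endpoint (step 7) can then be invoked with boto3's sagemaker-runtime client.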

Reference docs

Ready to try Forefront?

Save time, avoid frustration, and get more ML work done. Try Forefront today.

Try beta