How to Use Hugging Face API
Hugging Face is one of the best platforms for machine learning and artificial intelligence (AI) models. Using the Hugging Face API, we can easily interact with various pre-trained models for tasks like text generation, translation, sentiment analysis, and more.
In this article, we are going to discuss how to use the Hugging Face API with simple steps and examples.
Why Use Hugging Face API?
- Ease of use: We don't have to spend hours building models from scratch. The API streamlines the process, making it accessible even to beginners.
- Pre-trained Models: Hugging Face offers a treasure trove of pre-trained models, which means we can pick one that suits what we need and start using it right away.
- Time-Saving: With the Hugging Face API, we can integrate AI into our projects and focus on what really matters, developing our application, rather than getting bogged down in the intricacies of machine learning.
Setting Up the Hugging Face API
Before using the Hugging Face API, we first have to set it up. Let's see how.
Step 1: Create a Hugging Face Account and Get API Token
Create an account on Hugging Face. After creating the account, go to Settings > Access Tokens in your account settings and generate an API token.
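Hard-coding tokens in source files is risky, so a common practice (our suggestion here, not something the API requires) is to keep the token in an environment variable and read it at runtime:

import os

# Read the token from an environment variable; "HF_TOKEN" is just a
# conventional name chosen for this example, not mandated by the API
hf_token = os.environ.get("HF_TOKEN")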
Step 2: Install the Hugging Face Hub Library
The Hugging Face Hub library helps us interact with the API. It provides an easy-to-use interface for working with Hugging Face models and making requests. We can install it with pip:
pip install huggingface_hub
Now that we have our API token and have installed the library, we can start making requests to the API.
Step 3: Import the Necessary Library and Authenticate
In our Python environment, we start by importing the InferenceClient class from the Hugging Face Hub. Then, we can authenticate using our API token:
from huggingface_hub import InferenceClient
# Replace 'your_hf_api_token_here' with your actual API token
client = InferenceClient(token="your_hf_api_token_here")
Once authenticated, we are ready to use the API.
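As a quick sanity check, we can run a small task right away. This is a minimal sketch assuming the distilbert-base-uncased-finetuned-sst-2-english sentiment model (our choice for illustration); any text-classification model on the Hub should work the same way:

from huggingface_hub import InferenceClient

client = InferenceClient(token="your_hf_api_token_here")

# Classify the sentiment of a sentence; returns a list of label/score pairs
result = client.text_classification(
    "I really enjoy working with this API!",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(result)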
Using the Hugging Face API for NLP Tasks
Now we are going to look at different Natural Language Processing (NLP) tasks using the Hugging Face API, focusing on Text Generation, Named Entity Recognition (NER), and Question Answering. These tasks demonstrate the capabilities of models like GPT-2 and BERT, which can significantly enhance how we understand and interact with text.
Text Generation with GPT-2
Using GPT-2 for text generation is straightforward with Hugging Face's API. By sending an input prompt, we can generate coherent, engaging text for various applications.
Here’s how to get started:
- Setup: Import the requests library and set up the API details with the GPT-2 model URL and authorization headers.
- Query Function: Define a function that sends a POST request to the API with a prompt and receives the generated text in JSON format.
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer your_hf_api_token_here"}  # Replace with your actual token

# Define the query function to send a request to the Hugging Face API
def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

# Define the text generation function using the query function
def generate_text(prompt):
    response = query({"inputs": prompt})
    # Print the raw response to inspect its structure
    print(response)
    # Safely access the response, handling errors
    if isinstance(response, list) and len(response) > 0:
        return response[0].get('generated_text', "No text generated.")
    else:
        return "Error: Unable to generate text."

# Provide a text prompt
prompt = "Once upon a time"
print(generate_text(prompt))
Output:
[{'generated_text': 'Once upon a time it seemed like Lightsaber on Coruscant would be a terrible threat. to Spike they decided . . . .and they continued their'}]
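Instead of calling the raw HTTP endpoint, the same request can also go through the InferenceClient from Step 3. A minimal sketch, assuming the gpt2 model used above:

from huggingface_hub import InferenceClient

client = InferenceClient(token="your_hf_api_token_here")

# Generate a continuation of the prompt; max_new_tokens caps the output length
generated = client.text_generation("Once upon a time", model="gpt2", max_new_tokens=50)
print(generated)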
Named Entity Recognition (NER) with BERT
Named Entity Recognition (NER) identifies entities like names, locations, and organizations in text, making it a valuable tool for data extraction and information retrieval.
- API Setup: Use the bert-large-cased-finetuned-conll03-english model and authenticate with a Bearer token.
- NER Function: Define a query function that sends text as JSON and processes the API's response.
import time
import requests

API_URL = "https://api-inference.huggingface.co/models/dbmdz/bert-large-cased-finetuned-conll03-english"
headers = {"Authorization": "Bearer your_hf_api_token_here"}  # Replace with your actual token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

def ner(text):
    # The hosted model may take a moment to load; retry while the API
    # reports a loading error
    while True:
        response = query({"inputs": text})
        if 'error' in response and 'loading' in response['error']:
            print(response['error'])
            time.sleep(10)  # Wait for 10 seconds before retrying
        else:
            return response

text = "Hugging Face Inc. is based in New York City."
print(ner(text))
Output:
[{'entity_group': 'ORG', 'score': 0.9958662986755371, 'word': 'Hugging Face Inc', 'start': 0, 'end': 16}, {'entity_group': 'LOC', 'score': 0.9992396235466003, 'word': 'New York City', 'start': 30, 'end': 43}]
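Assuming the response has the list-of-dicts structure shown above, a short loop makes the entities easier to read:

# Print each detected entity on its own line (assumes the response
# format shown in the output above)
for entity in ner(text):
    print(f"{entity['entity_group']}: {entity['word']} (score={entity['score']:.3f})")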
Question Answering with RoBERTa
Using RoBERTa for Question Answering enables contextualized responses based on provided information, which is ideal for chatbots and Q&A applications.
- API Setup: Use the roberta-base-squad2 model and set up a function that takes a question and context.
- Response Handling: The model returns the best answer based on the input context.
API_URL = "https://api-inference.huggingface.co/models/deepset/roberta-base-squad2"
def answer_question(question, context):
response = query({
"inputs": {
"question": question,
"context": context
}
})
return response['answer']
question = "What is Hugging Face?"
context = "Hugging Face is a company based in New York."
print(answer_question(question, context))
Output:
a company based in New York
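The same question-answering call is also available through the InferenceClient. A minimal sketch, assuming the deepset/roberta-base-squad2 model used above:

from huggingface_hub import InferenceClient

client = InferenceClient(token="your_hf_api_token_here")

# Ask a question against a context passage; depending on the huggingface_hub
# version, the result is a dict or a small dataclass containing the answer
result = client.question_answering(
    question="What is Hugging Face?",
    context="Hugging Face is a company based in New York.",
    model="deepset/roberta-base-squad2",
)
print(result)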
Using the Hugging Face API for Computer Vision Tasks
For computer vision, Hugging Face provides image-processing models that facilitate classification, object detection, and segmentation.
Image Classification
Using image classification, we can identify the contents of an image.
Here’s how to set it up:
- API Setup: Use an image classification model, such as google/vit-base-patch16-224, with authentication details.
- Classification Function: The function reads the image, sends it to the API, and returns label predictions.
import requests

API_URL = "https://api-inference.huggingface.co/models/google/vit-base-patch16-224"
headers = {"Authorization": "Bearer your_hf_api_token_here"}  # Replace with your actual token

def classify_image(image_path):
    # Open the image file in binary read mode
    with open(image_path, "rb") as f:
        image_data = f.read()
    # Send the raw image bytes to the API
    response = requests.post(API_URL, headers=headers, data=image_data)
    result = response.json()
    # Handle errors in the response
    if isinstance(result, dict) and 'error' in result:
        print(f"Error: {result['error']}")
        return None
    return result

# Path to your image file
image_path = "/content/animal images.jpg"
predictions = classify_image(image_path)

# Print each predicted label with its confidence score
if predictions is not None:
    for prediction in predictions:
        print(f"{prediction['label']}: {prediction['score']:.4f}")
Output:
A list of predicted labels with confidence scores, in the form [{'label': ..., 'score': ...}, ...], printed one label per line.
Object Detection with Facebook DETR
Object detection allows precise identification and localization of objects within images.
- API Setup: Use the DETR model for detecting objects in photos.
- Object Detection Function: Reads, encodes, and sends the image to the API, returning coordinates for detected objects.
import requests
import base64
from PIL import Image, ImageDraw

API_URL = "https://api-inference.huggingface.co/models/facebook/detr-resnet-50"
headers = {"Authorization": "Bearer your_hf_api_token_here"}  # Replace with your actual token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

def detect_objects(image_path):
    # Open the image file in binary read mode
    with open(image_path, "rb") as f:
        image_data = f.read()
    # Encode the image data to base64
    image_base64 = base64.b64encode(image_data).decode('utf-8')
    # Send the base64-encoded image to the API
    response = query({"inputs": image_base64})
    # Handle errors in the response
    if 'error' in response:
        print(f"Error: {response['error']}")
        return None
    return response

def draw_boxes(image_path, detections, output_path="output_with_boxes.jpg"):
    # Open the image
    image = Image.open(image_path)
    draw = ImageDraw.Draw(image)
    # Draw each bounding box on the image
    for detection in detections:
        box = detection['box']
        label = detection['label']
        score = detection['score']
        # Bounding box coordinates
        xmin, ymin, xmax, ymax = box['xmin'], box['ymin'], box['xmax'], box['ymax']
        # Draw the bounding box
        draw.rectangle([(xmin, ymin), (xmax, ymax)], outline="red", width=3)
        # Add label and score
        draw.text((xmin, ymin), f"{label} ({score:.2f})", fill="red")
    # Save the image with bounding boxes
    image.save(output_path)
    print(f"Image saved with bounding boxes as {output_path}")

# Path to your image file
image_path = "dog.jpg"
result = detect_objects(image_path)

# Draw boxes and save the result if detection is successful
if result is not None:
    draw_boxes(image_path, result)
Output:
Image saved with bounding boxes as output_with_boxes.jpg
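The computer vision intro also mentioned segmentation. As a brief illustration, here is a minimal sketch using the InferenceClient with the nvidia/segformer-b0-finetuned-ade-512-512 model (our choice of model for this example, not one prescribed above):

from huggingface_hub import InferenceClient

client = InferenceClient(token="your_hf_api_token_here")

# Segment an image; each result carries a label, a confidence score, and a mask
segments = client.image_segmentation("dog.jpg", model="nvidia/segformer-b0-finetuned-ade-512-512")
for segment in segments:
    print(f"{segment.label}: {segment.score}")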
Conclusion
The Hugging Face API is a powerful tool for performing a wide range of machine learning tasks. With just a few lines of code, we can integrate pre-trained models into our applications without worrying about the heavy lifting of training our own models. Whether we are building a chatbot, performing language translation, or generating text, Hugging Face offers a simple and effective solution through its API. You can explore different models in the Hugging Face Model Hub and experiment with the code provided above to start using these models in your projects.