Examples

Scrape Hacker News using the MultiOn API ✨

This example shows how to use the MultiOn API to scrape the top posts from Hacker News.

Install Required Packages

Ensure you have the necessary package installed by running the following command in your terminal:

$ pip install multion

Import Required Libraries

In your Python script, import the required libraries for the example:

import multion

Initialize the MultiOn Client

from multion.client import MultiOn

multion = MultiOn(api_key="MULTION_API_KEY")
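Rather than hard-coding the key, you may prefer to read it from an environment variable. This is a minimal sketch of that pattern; the variable name MULTION_API_KEY is just a convention here, not something the SDK requires:

import os

from multion.client import MultiOn

# Read the key from the environment; MULTION_API_KEY is an arbitrary variable name.
api_key = os.environ["MULTION_API_KEY"]
multion = MultiOn(api_key=api_key)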

Autonomous mode: Browse

response = multion.browse(
    cmd="find the top post on hackernews",
    url="https://news.ycombinator.com/"
)
print(response.message)
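The step-mode loop later in this example relies on the response's status field. Assuming the browse response exposes the same field (worth confirming against the API reference), you can check whether the task actually finished before using the message:

# multion comes from the client initialization above.
response = multion.browse(
    cmd="find the top post on hackernews",
    url="https://news.ycombinator.com/"
)

# Assumes the browse response carries the same status field used in step mode below.
print("status: ", response.status)
print("message:", response.message)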

Step mode: Create a New Session

Create a new session to initiate the query:

create_session_response = multion.sessions.create(url="https://news.ycombinator.com/")
print(create_session_response.message)
session_id = create_session_response.session_id

Step the Session

Step the session repeatedly, re-issuing the command until the task status is no longer CONTINUE:

status = 'CONTINUE'
while status == 'CONTINUE':
    response = multion.sessions.step(
        session_id=session_id,
        cmd="find the top post on hackernews",
        include_screenshot=True
    )
    status = response.status

if response.status == 'DONE':
    print('task completed')
print(response.message)
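An agent run can occasionally keep returning CONTINUE. A simple guard such as the max_steps cap below (an arbitrary limit chosen for this sketch, not part of the API) keeps the loop from stepping indefinitely:

# multion and session_id come from the setup and create-session steps above.
max_steps = 20  # arbitrary cap on how many steps we allow
status = 'CONTINUE'
for _ in range(max_steps):
    if status != 'CONTINUE':
        break
    response = multion.sessions.step(
        session_id=session_id,
        cmd="find the top post on hackernews",
        include_screenshot=True
    )
    status = response.status

if status == 'DONE':
    print('task completed')
else:
    print('stopped before completion with status:', status)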

Capture Screenshot

Capture a screenshot of the Hacker News page:

get_screenshot = multion.sessions.screenshot(session_id=session_id)
print("screenshot of session: ", get_screenshot.screenshot)

Close Session

Finally, close the session when done:

close_session_response = multion.sessions.close(session_id=session_id)
print("close_session_response: ", close_session_response)

Thank you for exploring this MultiOn API example. Should you have any questions, feel free to reach out on our Discord channel or directly to our team. Happy building! 😊