<div align="center">

<h1>LinkedIn API for Python</h1>
Search profiles, send messages, find jobs and more in Python. No official API access required.
<p align="center">
  <a href="https://linkedin-api.readthedocs.io">Documentation</a> · <a href="#quick-start">Quick Start</a> · <a href="#how-it-works">How it works</a>
</p>

</div>

<h3 align="center">Sponsors</h3>

<p align="center">
  <a href="https://bit.ly/4fUyE9J" target="_blank">
    <img width="450px" src="https://raw.githubusercontent.com/tomquirk/linkedin-api/main/docs/assets/logos/scrapin-banner.png" alt="Scrapin">
  </a>
</p>

<p align="center" dir="auto">
  <a href="https://bit.ly/3AFPGZd" target="_blank">
    <img height="45px" src="https://raw.githubusercontent.com/tomquirk/linkedin-api/main/docs/assets/logos/proapis.png" alt="iScraper by ProAPIs">
  </a>
  <a href="https://bit.ly/3SWnB63" target="_blank">
    <img height="45px" src="https://raw.githubusercontent.com/tomquirk/linkedin-api/main/docs/assets/logos/prospeo.png" alt="Prospeo">
  </a>
  <a href="https://bit.ly/3SRximo" target="_blank">
    <img height="45px" src="https://raw.githubusercontent.com/tomquirk/linkedin-api/main/docs/assets/logos/proxycurl.png" alt="proxycurl">
  </a>
  <a href="https://bit.ly/3Mbksvd" target="_blank">
    <img height="45px" src="https://raw.githubusercontent.com/tomquirk/linkedin-api/main/docs/assets/logos/lix.png" alt="Lix">
  </a>
  <a href="https://bit.ly/3WOIMrX" target="_blank">
    <img height="70px" src="https://raw.githubusercontent.com/tomquirk/linkedin-api/main/docs/assets/logos/unipile.png" alt="Unipile">
  </a>
</p>

<p align="center"><a href="https://bit.ly/4cCjbIq" target="_blank">Become a sponsor</a></p>

## Features
- ✅ No official API access required. Just use a valid LinkedIn user account.
- ✅ Direct HTTP API interface. No Selenium, Puppeteer, or other browser-based scraping methods.
- ✅ Get and search people, companies, jobs, posts
- ✅ Send and retrieve messages
- ✅ Send and accept connection requests
- ✅ Get and react to posts
And more! Read the docs for all API methods.
> [!IMPORTANT]
> This library is not officially supported by LinkedIn. Using this library might violate LinkedIn's Terms of Service. Use it at your own risk.
## Installation

> [!NOTE]
> Python >= 3.10 required

```console
pip install linkedin-api
```
Or, for bleeding edge:
```console
pip install git+https://github.com/tomquirk/linkedin-api.git
```
## Quick Start

> [!TIP]
> See all API methods on [the docs](https://linkedin-api.readthedocs.io).
The following snippet demonstrates a few basic linkedin_api use cases:
```python
from linkedin_api import Linkedin

# Authenticate using any LinkedIn user account credentials
api = Linkedin('reedhoffman@linkedin.com', '*******')

# GET a profile
profile = api.get_profile('billy-g')

# GET a profile's contact info
contact_info = api.get_profile_contact_info('billy-g')

# GET 1st degree connections of a given profile
connections = api.get_profile_connections('1234asc12304')
```
## Commercial alternatives
This is a sponsored section.

<h3> <a href="https://prospeo.io/api/linkedin-email-finder"> Prospeo </a> </h3>
Extract data and find verified emails in real-time with Prospeo LinkedIn Email Finder API.
<details>
<summary>Learn more</summary>

Submit a LinkedIn profile URL to our API and get:

- Profile data extracted in real-time
- Company data of the profile
- Verified work email of the profile
- Exclusive data points (gender, cleaned country code, time zone...)
- One do-it-all request
- Stable API, tested under high load
Try it with 75 profiles. Get your FREE API key now.
</details>

<h3> <a href="https://nubela.co/proxycurl/?utm_campaign=influencer%20marketing&utm_source=github&utm_medium=social&utm_term=-&utm_content=tom%20quirk"> Proxycurl </a> </h3>

Scrape public LinkedIn profile data at scale with Proxycurl APIs.
<details>
<summary>Learn more</summary>

- Scraping public profiles was battle-tested in court in the HiQ vs. LinkedIn case.
- GDPR, CCPA, SOC2 compliant
- High rate limit - 300 requests/minute
- Fast - APIs respond in ~2s
- Fresh data - 88% of data is scraped in real-time; the remaining 12% is no older than 29 days
- High accuracy
- Tons of data points returned per profile
Built for developers, by developers.
</details>

<h3> <a href="https://www.unipile.com/communication-api/messaging-api/linkedin-api/?utm_campaign=git%20tom%20quirk"> Unipile </a> </h3>

Full LinkedIn API: Connect Classic/Sales Navigator/Recruiter, synchronize real-time messaging, enrich data and build outreach sequences…
<details>
<summary>Learn more</summary>

- Easily connect your users in the cloud with our white-label authentication (captcha solving, in-app validation, OTP, 2FA).
- Real-time webhook for each message received, read status, invitation accepted, and more.
- Data extraction: get profile, get company, get post, extract search results from Classic + Sales Navigator + Recruiter
- Outreach sequences: send invitations, InMail, messages, and comment on posts…
Test all the features with our 7-day free trial.
</details>

<h3> <a href="https://bit.ly/4fUyE9J"> ScrapIn </a> </h3>

Scrape any data from LinkedIn, without limits, with the ScrapIn API.
<details>
<summary>Learn more</summary>

- Real-time data (no cache)
- Built for SaaS developers
- GDPR, CCPA, SOC2 compliant
- Interactive API documentation
- A highly stable API, backed by over 4 years of experience in data provisioning, with the added reliability of two additional data provider brands owned by the company behind ScrapIn.
Try it for free. Get your API key now
</details>

<h3> <a href="https://bit.ly/3AFPGZd"> iScraper by ProAPIs, Inc. </a> </h3>

Access high-quality, real-time LinkedIn data at scale with iScraper API, offering unlimited scalability and unmatched accuracy.
<details>
<summary>Learn more</summary>

- Real-time LinkedIn data scraping with unmatched accuracy
- Hosted datasets with powerful Lucene search access
- Designed for enterprise and corporate-level applications
- Handles millions of scrapes per day, ensuring unlimited scalability
- Trusted by top enterprises for mission-critical data needs
- Interactive API documentation built on OpenAPI 3 specs for seamless integration
- Backed by over 10 years of experience in real-time data provisioning
- Lowest price guarantee for high volume use
Get started here.
</details>

End of sponsored section.
## Development

### Dependencies
- [poetry](https://python-poetry.org/)
- A valid LinkedIn user account (don't use your personal account, if possible)
### Development installation

1. Create a `.env` config file (use `.env.example` as a reference)
2. Install dependencies using `poetry`:

   ```console
   poetry install
   poetry self add poetry-plugin-dotenv
   ```
### Run tests

Run all tests:

```console
poetry run pytest
```

Run unit tests:

```console
poetry run pytest tests/unit
```

Run E2E tests:

```console
poetry run pytest tests/e2e
```
### Lint

```console
poetry run black --check .
```

Or to fix:

```console
poetry run black .
```
## Troubleshooting

### I keep getting a CHALLENGE
LinkedIn will throw you a curve ball in the form of a Challenge URL. We currently don't handle this, so for now you're out of luck. We suspect it may be IP-based (e.g. logging in from a different location). Your best chance at resolution is to log out and log back in on your browser.
Known reasons for Challenge include:
- 2FA
- Rate limit - "It looks like you're visiting a very high number of pages on LinkedIn." Note: in one n=1 experiment, this page was hit after ~900 contiguous requests in a single session (within the hour), with random delays between each request, plus a bunch of other testing - so who knows the actual limit.
Please add more as you come across them.
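Given that the actual limit is unknown, adding randomized delays between requests may reduce the chance of tripping it. A minimal sketch (the default bounds here are arbitrary assumptions, not known-safe values):

```python
import random
import time

def polite_delay(base_seconds=2.0, jitter_seconds=3.0):
    """Sleep for a random interval to avoid a bursty, bot-like request pattern.

    Returns the delay actually used, which can be handy for logging.
    """
    delay = base_seconds + random.uniform(0, jitter_seconds)
    time.sleep(delay)
    return delay

# Example: pace a batch of profile fetches
# for public_id in ids:
#     profile = api.get_profile(public_id)  # assumes an authenticated `api`
#     polite_delay()
```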
### Search problems

- Mileage may vary when searching general keywords like "software" using the standard `search` method. LinkedIn has recently added some smarts around search whereby results are grouped by people, company, jobs, etc. if the query is general enough. Use an entity-specific search method (e.g. `search_people`) where possible.
## How it works
This project attempts to provide a simple Python interface for the LinkedIn API.
### Do you mean the legit LinkedIn API?
NO! To retrieve structured data, the LinkedIn Website uses a service they call Voyager. Voyager endpoints give us access to pretty much everything we could want from LinkedIn: profiles, companies, connections, messages, etc. - anything that you can see on linkedin.com, we can get from Voyager.
This project aims to provide complete coverage for Voyager.
### Deep dive
Voyager endpoints look like this:
```
https://www.linkedin.com/voyager/api/identity/profileView/tom-quirk
```
Or, more clearly:

```
____________________________________ _______________________________
|            base path             | |          resource           |
https://www.linkedin.com/voyager/api /identity/profileView/tom-quirk
```
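That anatomy is trivial to capture in a helper; a sketch (the base path is taken verbatim from the example above):

```python
# Base path observed in Voyager URLs on linkedin.com
VOYAGER_BASE = "https://www.linkedin.com/voyager/api"

def voyager_url(resource):
    """Join the Voyager base path with a resource path."""
    return f"{VOYAGER_BASE}/{resource.lstrip('/')}"

print(voyager_url("/identity/profileView/tom-quirk"))
# https://www.linkedin.com/voyager/api/identity/profileView/tom-quirk
```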
They are authenticated with a simple cookie, which we send with every request, along with a bunch of headers.
To get a cookie, we POST a given username and password (of a valid LinkedIn user account) to `https://www.linkedin.com/uas/authenticate`.
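A rough, stdlib-only sketch of what that request is built from (the `session_key`/`session_password` field names and the csrf-token-equals-JSESSIONID convention are assumptions based on observed client behaviour, not a documented contract; a real client also sends a user agent and other headers):

```python
from urllib.parse import urlencode

AUTH_URL = "https://www.linkedin.com/uas/authenticate"

def auth_payload(username, password):
    """Form-encoded body POSTed to the authenticate endpoint (assumed field names)."""
    return urlencode({"session_key": username, "session_password": password})

def csrf_header(jsessionid_cookie):
    """Voyager requests appear to require a csrf-token header equal to the
    JSESSIONID cookie value with its surrounding quotes stripped."""
    return {"csrf-token": jsessionid_cookie.strip('"')}

print(auth_payload("user@example.com", "hunter2"))
# session_key=user%40example.com&session_password=hunter2
```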
### Find new endpoints
We're looking at the LinkedIn website and we spot some data we want. What now?
The following describes the most reliable method to find relevant endpoints:
1. View the page source
2. `command-f`/search the page for some keyword in the data. This will exist inside a `<code>` tag.
3. Scroll down to the next adjacent element, which will be another `<code>` tag, probably with an `id` that looks something like:

   ```html
   <code style="display: none" id="datalet-bpr-guid-3900675">
     {"request":"/voyager/api/identity/profiles/tom-quirk/profileView","status":200,"body":"bpr-guid-3900675"}
   </code>
   ```

The value of `request` is the URL! 🤘
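The search-and-scroll steps can be automated over a saved copy of the page source. A sketch, assuming every embedded payload lives in a hidden `<code id="datalet-...">` tag shaped like the example above:

```python
import json
import re

def find_voyager_requests(page_source):
    """Extract Voyager request URLs embedded in <code id="datalet-..."> tags."""
    urls = []
    # Grab the JSON body of each hidden datalet <code> element.
    for match in re.finditer(
        r'<code[^>]*id="datalet-[^"]*"[^>]*>(.*?)</code>', page_source, re.S
    ):
        try:
            payload = json.loads(match.group(1))
        except ValueError:
            continue  # not JSON; skip
        if "request" in payload:
            urls.append(payload["request"])
    return urls

html = ('<code style="display: none" id="datalet-bpr-guid-3900675">'
        '{"request":"/voyager/api/identity/profiles/tom-quirk/profileView",'
        '"status":200,"body":"bpr-guid-3900675"}</code>')
print(find_voyager_requests(html))
# ['/voyager/api/identity/profiles/tom-quirk/profileView']
```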
You can also use the `network` tab in your browser's developer tools, but you will encounter mixed results.
### How clients query Voyager
linkedin.com uses the Rest-li protocol for querying data. Rest-li is an internal query language/syntax in which clients (like linkedin.com) specify what data they want. It's conceptually similar to GraphQL.
Here's an example of making a request for an organisation's `name` and `groups` (the LinkedIn groups it manages):

```
/voyager/api/organization/companies?decoration=(name,groups*~(entityUrn,largeLogo,groupName,memberCount,websiteUrl,url))&q=universalName&universalName=linkedin
```
The "querying" happens in the `decoration` parameter, which looks like the following:

```
(
    name,
    groups*~(entityUrn,largeLogo,groupName,memberCount,websiteUrl,url)
)
```
Here, we request an organisation's name and a list of groups, where for each group we want its `largeLogo`, `groupName`, and so on.
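A `decoration` string of this shape can be generated programmatically. A sketch, assuming the syntax really is just plain field names plus `name*~(...)` for nested collections (inferred from the single example above, not from any official Rest-li documentation):

```python
def decoration(*fields):
    """Build a Rest-li decoration string.

    A plain string selects a field; a (name, subfields) tuple selects a
    collection and the fields wanted for each of its elements.
    """
    parts = []
    for field in fields:
        if isinstance(field, tuple):
            name, subfields = field
            parts.append(f"{name}*~({','.join(subfields)})")
        else:
            parts.append(field)
    return f"({','.join(parts)})"

d = decoration(
    "name",
    ("groups", ["entityUrn", "largeLogo", "groupName",
                "memberCount", "websiteUrl", "url"]),
)
print(d)
# (name,groups*~(entityUrn,largeLogo,groupName,memberCount,websiteUrl,url))
```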
Different endpoints use different parameters (and perhaps even different syntaxes) to specify these queries. Notice that the above query had a parameter `q` whose value was `universalName`; the query was then specified with the `decoration` parameter.

In contrast, the `/search/cluster` endpoint uses `q=guided`, and specifies its query with the `guided` parameter, whose value is something like:

```
List(v->PEOPLE)
```
It could be possible to document (and implement a nice interface for) this query language - as we add more endpoints to this project, I'm sure it will become more clear if such a thing would be possible (and if it's worth it).
## Release a new version

1. Bump `version` in `pyproject.toml`
2. `poetry build`
3. `poetry publish -r test-pypi`
4. `poetry publish`
5. Draft release notes in GitHub.
## Disclaimer
This library is not endorsed or supported by LinkedIn. It is an unofficial library intended for educational purposes and personal use only. By using this library, you agree to not hold the author or contributors responsible for any consequences resulting from its usage.