Creating a Python script to scrape Twitter data is genuinely useful for gathering insights, such as user reviews or discussions around specific topics, which can greatly aid marketing and research. Automating the collection process with such scripts makes it fast and efficient.
There are two packages you need to install before you start writing the actual code. You also need a package manager for Python packages (pip) to install them. Fortunately, pip is installed alongside Python when you set it up on your machine. To install the packages, simply run the command below in your command-line interface (CLI).
pip install selenium-wire selenium undetected-chromedriver
Once the installation is complete, import these packages into your Python file as shown below.
from seleniumwire import webdriver as wiredriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
from time import sleep
import json
import seleniumwire.undetected_chromedriver as uc  # selenium-wire's wrapper accepts seleniumwire_options
import random
import ssl

ssl._create_default_https_context = ssl._create_stdlib_context
It is well established that using a proxy while scraping is important. Twitter is one of the social media platforms that frowns upon data scraping, so to stay safe and avoid getting blocked, you should use a proxy.
All you have to do is supply your proxy address, proxy username, and proxy password, and your IP should now be masked and protected. Running a headless browser, which is essentially running a browser without a visible interface, helps speed up the scraping process, which is why we added the headless flag to the options.
# Specify the proxy address with its username and password in a list of proxies
proxies = [
    "proxy_username:proxy_password@proxy_address:port_number",
]

# Function to pick a random proxy
def get_proxy():
    return random.choice(proxies)

# Set up Chrome options with the proxy and authentication
chrome_options = Options()
chrome_options.add_argument("--headless")

proxy = get_proxy()
proxy_options = {
    "proxy": {
        "http": f"http://{proxy}",
        "https": f"https://{proxy}",
    }
}
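A malformed proxy entry will only surface as a connection error mid-run. As a quick sanity check before starting, you can validate each entry against the expected layout. This is a sketch: the `username:password@host:port` format is assumed from the list above, and passwords containing `:` or `@` would need URL-encoding first.

```python
def validate_proxy(entry: str) -> bool:
    # Expect the "username:password@host:port" layout used in the proxies list
    creds, sep, hostport = entry.partition("@")
    if not sep:
        return False  # no '@' separator at all
    if creds.count(":") != 1 or hostport.count(":") != 1:
        return False  # credentials or host/port part is malformed
    user, password = creds.split(":")
    host, port = hostport.split(":")
    return bool(user) and bool(password) and bool(host) and port.isdigit()

print(validate_proxy("user:pass@203.0.113.7:8080"))  # True
print(validate_proxy("user@203.0.113.7:8080"))       # False: missing password
```

Running this over the `proxies` list at startup lets the script fail fast with a clear message instead of a confusing mid-scrape timeout.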
To scrape Twitter data effectively with Python, the script needs the access credentials for your Twitter account, namely the username and password.
In addition, you must specify a search keyword. The script uses it to build the URL https://twitter.com/search?q={search_keyword}&src=typed_query&f=top, which lets it search for that keyword on Twitter.
The next step is to create a ChromeDriver instance, passing in the proxy details as an option. This setup directs ChromeDriver to use a specific IP address when loading the page. The search URL is then loaded with these configurations. Once the page loads, you must sign in to access the search results. Using WebDriverWait, the script verifies that the page has fully loaded by checking that the username entry field is present. If this area fails to load, the ChromeDriver instance is terminated.
search_keyword = input("What topic on X/Twitter would you like to gather data on?\n").replace(' ', '%20')
constructed_url = f"https://twitter.com/search?q={search_keyword}&src=typed_query&f=top"
# Provide your X/Twitter username and password here
x_username = ""
x_password = ""
print(f'Opening {constructed_url} in Chrome...')
# Create a WebDriver instance with undetected-chromedriver
driver = uc.Chrome(options=chrome_options, seleniumwire_options=proxy_options)
driver.get(constructed_url)
try:
    element = WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, "//div[@class='css-175oi2r r-1mmae3n r-1e084wi r-13qz1uu']"))
    )
except Exception as err:
    print(f'WebDriver Wait Error: Most likely Network TimeOut: Details\n{err}')
    driver.quit()
# Sign in
if element:
    username_field = driver.find_element(By.XPATH, "//input[@class='r-30o5oe r-1dz5y72 r-13qz1uu r-1niwhzg r-17gur6a r-1yadl64 r-deolkf r-homxoj r-poiln3 r-7cikom r-1ny4l3l r-t60dpp r-fdjqy7']")
    username_field.send_keys(x_username)
    username_field.send_keys(Keys.ENTER)
    password_field = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH, "//input[@class='r-30o5oe r-1dz5y72 r-13qz1uu r-1niwhzg r-17gur6a r-1yadl64 r-deolkf r-homxoj r-poiln3 r-7cikom r-1ny4l3l r-t60dpp r-fdjqy7']"))
    )
    password_field.send_keys(x_password)
    password_field.send_keys(Keys.ENTER)
    print("Sign In Successful...\n")
    sleep(10)
Create a list variable, results, to systematically store all the collected data as dictionaries. Then define a function named scrape() to systematically gather a wealth of data for every tweet, covering crucial details such as the display name, username, post content, and metrics like likes and impressions.
A proactive approach was taken to guarantee uniformity in the lengths of the lists. The min() function ensures that the length of each list matches the others. By following this methodology, we ensure a synchronized and structured approach to collecting and processing Twitter data.
When we scrape the vanity numbers/metrics, they are returned as strings rather than numbers. We then need to convert those strings into numbers using convert_to_numeric() so that the results can be ordered by impressions.
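The truncate-to-the-shortest-list behaviour that the min() guard provides can also be illustrated with Python's zip(), which stops as soon as any input runs out. This is a standalone sketch with made-up values, not part of the scraper itself:

```python
# Lists scraped from a page can end up with mismatched lengths
display_names = ["Linda Evelyn Namulindwa", "The Gym Chef", "Extra Name"]
usernames = ["@LindaEvelyn_N", "@GymCheff"]  # one element shorter

# zip() pairs items only up to the shortest list, mirroring the min() guard
rows = [{"displayName": d, "Username": u} for d, u in zip(display_names, usernames)]
print(len(rows))  # 2
```

Either approach prevents an IndexError when one metric column has fewer elements than the others.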
results = []

# Scrape
def scrape():
    display_names = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1wbh5a2 r-dnmrzs r-1ny4l3l r-1awozwy r-18u37iz"]/div[1]/div/a/div/div[1]/span/span')
    usernames = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1wbh5a2 r-dnmrzs r-1ny4l3l r-1awozwy r-18u37iz"]/div[2]/div/div[1]/a/div/span')
    posts = driver.find_elements(By.XPATH,
        '//*[@class="css-146c3p1 r-8akbws r-krxsd3 r-dnmrzs r-1udh08x r-bcqeeo r-1ttztb7 r-qvutc0 r-37j5jr r-a023e6 r-rjixqe r-16dba41 r-bnwqim"]/span')
    comments = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1kbdv8c r-18u37iz r-1wtj0ep r-1ye8kvj r-1s2bzr4"]/div[1]/button/div/div[2]/span/span/span')
    retweets = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1kbdv8c r-18u37iz r-1wtj0ep r-1ye8kvj r-1s2bzr4"]/div[2]/button/div/div[2]/span/span/span')
    likes = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1kbdv8c r-18u37iz r-1wtj0ep r-1ye8kvj r-1s2bzr4"]/div[3]/button/div/div[2]/span/span/span')
    impressions = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1kbdv8c r-18u37iz r-1wtj0ep r-1ye8kvj r-1s2bzr4"]/div[4]/a/div/div[2]/span/span/span')
    min_length = min(len(display_names), len(usernames), len(posts), len(comments), len(retweets), len(likes),
                     len(impressions))
    for each in range(min_length):
        results.append({
            'Username': usernames[each].text,
            'displayName': display_names[each].text,
            'Post': posts[each].text.removesuffix("Show more"),
            'Comments': 0 if comments[each].text == "" else convert_to_numeric(comments[each].text),
            'Retweets': 0 if retweets[each].text == "" else convert_to_numeric(retweets[each].text),
            'Likes': 0 if likes[each].text == "" else convert_to_numeric(likes[each].text),
            'Impressions': 0 if impressions[each].text == "" else convert_to_numeric(impressions[each].text)
        })
def reorder_json_by_impressions(json_data):
    # Sort the JSON list in place by 'Impressions' in descending order
    json_data.sort(key=lambda x: int(x['Impressions']), reverse=True)

def organize_write_data(data: dict):
    output = json.dumps(data, indent=2, ensure_ascii=False).encode("ascii", "ignore").decode("utf-8")
    try:
        with open("result.json", 'w', encoding='utf-8') as file:
            file.write(output)
    except Exception as err:
        print(f"Error encountered: {err}")

def convert_to_numeric(value):
    multipliers = {'K': 10 ** 3, 'M': 10 ** 6, 'B': 10 ** 9}
    try:
        if value[-1] in multipliers:
            return int(float(value[:-1]) * multipliers[value[-1]])
        else:
            return int(value)
    except ValueError:
        # Handle the case where the conversion fails
        return None
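To see how convert_to_numeric() behaves on the abbreviated counts Twitter displays, here is a small self-contained demonstration; the function is repeated verbatim from above so the snippet runs on its own:

```python
def convert_to_numeric(value):
    # Convert Twitter's abbreviated counts ("1.5K", "2M") to plain integers
    multipliers = {'K': 10 ** 3, 'M': 10 ** 6, 'B': 10 ** 9}
    try:
        if value[-1] in multipliers:
            return int(float(value[:-1]) * multipliers[value[-1]])
        else:
            return int(value)
    except ValueError:
        # Strings that are not numeric at all return None
        return None

print(convert_to_numeric("1.5K"))  # 1500
print(convert_to_numeric("66K"))   # 66000
print(convert_to_numeric("2M"))    # 2000000
print(convert_to_numeric("345"))   # 345
print(convert_to_numeric("n/a"))   # None
```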
To organize the data better, we created a function that takes the results and sorts the tweets in descending order by the number of impressions each tweet gathered. Logically, we want to see the tweet with the highest vanity number first, ahead of the others.
def reorder_json_by_impressions(json_data):
    # Sort the JSON list in place by 'Impressions' in descending order
    json_data.sort(key=lambda x: int(x['Impressions']), reverse=True)
A JSON file is the best way to visualize all the collected data. Writing to a JSON file is just like writing to any other file in Python. The only difference is that we need the json module to format the data properly before writing it to the file.
If the code ran correctly, you should see a result.json file in your file structure, and its contents should look like the output shown in the section below.
def organize_write_data(data: dict):
    output = json.dumps(data, indent=2, ensure_ascii=False).encode("ascii", "ignore").decode("utf-8")
    try:
        with open("result.json", 'w', encoding='utf-8') as file:
            file.write(output)
    except Exception as err:
        print(f"Error encountered: {err}")
To kick off execution of the code, we call our functions sequentially to start scraping data. We create a reference using the ActionChains module within Selenium to facilitate various Selenium actions. This module proves crucial for simulating scrolling down the page.
The first round involves scraping data from the currently loaded page. A loop then begins, repeating five times; during each iteration the page is scrolled down, followed by a five-second pause before the next scraping iteration.
Users can adjust the range of the loop, increasing or decreasing it to customize the volume of data scraped. It is crucial to note that if there is no additional content to display, the script will keep scraping the same data, resulting in redundancy. To prevent this, adjust the loop range accordingly to avoid recording redundant data.
actions = ActionChains(driver)
for i in range(5):
    actions.send_keys(Keys.END).perform()
    sleep(5)
    scrape()
reorder_json_by_impressions(results)
organize_write_data(results)
print(f"Scraping Information on {search_keyword} is done.")
driver.quit()
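As the warning above notes, extra scroll iterations can re-capture tweets that are already in results. One way to guard against that redundancy is to deduplicate the records before writing the file. This is a sketch, not part of the original script, and keying on the username/post pair is an assumption about what makes a record unique:

```python
def dedupe(records):
    # Keep only the first occurrence of each (Username, Post) pair
    seen = set()
    unique = []
    for item in records:
        key = (item['Username'], item['Post'])
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique

sample = [
    {'Username': '@a', 'Post': 'hello', 'Likes': 1},
    {'Username': '@a', 'Post': 'hello', 'Likes': 1},  # duplicate from re-scraping
    {'Username': '@b', 'Post': 'world', 'Likes': 2},
]
print(len(dedupe(sample)))  # 2
```

Calling something like `results = dedupe(results)` before `organize_write_data(results)` would keep result.json free of repeated tweets.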
Putting everything together, here is the complete script:
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
from time import sleep
import json
import seleniumwire.undetected_chromedriver as uc  # selenium-wire's wrapper accepts seleniumwire_options
import random
import ssl
ssl._create_default_https_context = ssl._create_stdlib_context
search_keyword = input("What topic on X/Twitter would you like to gather data on?\n").replace(' ', '%20')
constructed_url = f"https://twitter.com/search?q={search_keyword}&src=typed_query&f=top"
# Tabhair d’ainm úsáideora agus do phasfhocal X/Twitter anseo
x_username = ""
x_password = ""
print(f'Opening {constructed_url} in Chrome...')
# Specify the proxy address with its username and password in a list of proxies
proxies = [
    "USERNAME:PASSWORD@IP:PORT",
]

# Function to pick a random proxy
def get_proxy():
    return random.choice(proxies)

# Set up Chrome options with the proxy and authentication
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument('--ignore-ssl-errors')

proxy = get_proxy()
proxy_options = {
    "proxy": {
        "http": f"http://{proxy}",
        "https": f"https://{proxy}",
    }
}
# Create a WebDriver instance with undetected-chromedriver
driver = uc.Chrome(options=chrome_options, seleniumwire_options=proxy_options)
driver.get(constructed_url)
try:
    element = WebDriverWait(driver, 20).until(
        EC.presence_of_element_located((By.XPATH, "//div[@class='css-175oi2r r-1mmae3n r-1e084wi r-13qz1uu']"))
    )
except Exception as err:
    print(f'WebDriver Wait Error: Most likely Network TimeOut: Details\n{err}')
    driver.quit()

# Sign in
if element:
    username_field = driver.find_element(By.XPATH,
        "//input[@class='r-30o5oe r-1dz5y72 r-13qz1uu r-1niwhzg r-17gur6a r-1yadl64 r-deolkf r-homxoj r-poiln3 r-7cikom r-1ny4l3l r-t60dpp r-fdjqy7']")
    username_field.send_keys(x_username)
    username_field.send_keys(Keys.ENTER)
    password_field = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.XPATH,
            "//input[@class='r-30o5oe r-1dz5y72 r-13qz1uu r-1niwhzg r-17gur6a r-1yadl64 r-deolkf r-homxoj r-poiln3 r-7cikom r-1ny4l3l r-t60dpp r-fdjqy7']"))
    )
    password_field.send_keys(x_password)
    password_field.send_keys(Keys.ENTER)
    print("Sign In Successful...\n")
    sleep(10)
results = []

# Scrape
def scrape():
    display_names = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1wbh5a2 r-dnmrzs r-1ny4l3l r-1awozwy r-18u37iz"]/div[1]/div/a/div/div[1]/span/span')
    usernames = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1wbh5a2 r-dnmrzs r-1ny4l3l r-1awozwy r-18u37iz"]/div[2]/div/div[1]/a/div/span')
    posts = driver.find_elements(By.XPATH,
        '//*[@class="css-146c3p1 r-8akbws r-krxsd3 r-dnmrzs r-1udh08x r-bcqeeo r-1ttztb7 r-qvutc0 r-37j5jr r-a023e6 r-rjixqe r-16dba41 r-bnwqim"]/span')
    comments = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1kbdv8c r-18u37iz r-1wtj0ep r-1ye8kvj r-1s2bzr4"]/div[1]/button/div/div[2]/span/span/span')
    retweets = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1kbdv8c r-18u37iz r-1wtj0ep r-1ye8kvj r-1s2bzr4"]/div[2]/button/div/div[2]/span/span/span')
    likes = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1kbdv8c r-18u37iz r-1wtj0ep r-1ye8kvj r-1s2bzr4"]/div[3]/button/div/div[2]/span/span/span')
    impressions = driver.find_elements(By.XPATH,
        '//*[@class="css-175oi2r r-1kbdv8c r-18u37iz r-1wtj0ep r-1ye8kvj r-1s2bzr4"]/div[4]/a/div/div[2]/span/span/span')
    min_length = min(len(display_names), len(usernames), len(posts), len(comments), len(retweets), len(likes),
                     len(impressions))
    for each in range(min_length):
        results.append({
            'Username': usernames[each].text,
            'displayName': display_names[each].text,
            'Post': posts[each].text.removesuffix("Show more"),
            'Comments': 0 if comments[each].text == "" else convert_to_numeric(comments[each].text),
            'Retweets': 0 if retweets[each].text == "" else convert_to_numeric(retweets[each].text),
            'Likes': 0 if likes[each].text == "" else convert_to_numeric(likes[each].text),
            'Impressions': 0 if impressions[each].text == "" else convert_to_numeric(impressions[each].text)
        })
def reorder_json_by_impressions(json_data):
    # Sort the JSON list in place by 'Impressions' in descending order
    json_data.sort(key=lambda x: int(x['Impressions']), reverse=True)

def organize_write_data(data: dict):
    output = json.dumps(data, indent=2, ensure_ascii=False).encode("ascii", "ignore").decode("utf-8")
    try:
        with open("result.json", 'w', encoding='utf-8') as file:
            file.write(output)
    except Exception as err:
        print(f"Error encountered: {err}")

def convert_to_numeric(value):
    multipliers = {'K': 10 ** 3, 'M': 10 ** 6, 'B': 10 ** 9}
    try:
        if value[-1] in multipliers:
            return int(float(value[:-1]) * multipliers[value[-1]])
        else:
            return int(value)
    except ValueError:
        # Handle the case where the conversion fails
        return None
actions = ActionChains(driver)
for i in range(5):
    actions.send_keys(Keys.END).perform()
    sleep(5)
    scrape()
reorder_json_by_impressions(results)
organize_write_data(results)
print(f"Scraping Information on {search_keyword} is done.")
driver.quit()
Here is what the JSON file should look like after the scrape completes:
[
{
"Username": "@LindaEvelyn_N",
"displayName": "Linda Evelyn Namulindwa",
"Post": "Still getting used to Ugandan local foods so I had Glovo deliver me a KFC Streetwise Spicy rice meal (2 pcs of chicken & jollof rice at Ugx 18,000)\n\nNot only was it fast but it also accepts all payment methods.\n\n#GlovoDeliversKFC\n#ItsFingerLinkingGood",
"Comments": 105,
"Retweets": 148,
"Likes": 1500,
"Impressions": 66000
},
{
"Username": "@GymCheff",
"displayName": "The Gym Chef",
"Post": "Delicious High Protein KFC Zinger Rice Box!",
"Comments": 1,
"Retweets": 68,
"Likes": 363,
"Impressions": 49000
}
]
The guide outlined here can be used to scrape data on various topics of interest, facilitating studies in public sentiment analysis, trend tracking, monitoring, and reputation management. Python, in turn, simplifies the automated data collection process with its wide range of modules and built-in functions. These tools are essential for configuring proxies, managing page scrolling, and organizing the collected information effectively.
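As a starting point for the kinds of analysis mentioned above, the records written to result.json can be loaded back and summarized. This is a minimal illustrative sketch: the field names match the JSON shown earlier, but summarize() itself is an assumption, not part of the scraper.

```python
import json

def summarize(posts):
    # posts: the list of dictionaries written to result.json
    total_impressions = sum(p["Impressions"] for p in posts)
    most_liked = max(posts, key=lambda p: p["Likes"])
    return total_impressions, most_liked["Username"]

# Normally you would load the scraper's output:
#   with open("result.json", encoding="utf-8") as f:
#       posts = json.load(f)
# Here we use the two sample records from the output shown above.
posts = [
    {"Username": "@LindaEvelyn_N", "Likes": 1500, "Impressions": 66000},
    {"Username": "@GymCheff", "Likes": 363, "Impressions": 49000},
]
total, top = summarize(posts)
print(total, top)  # 115000 @LindaEvelyn_N
```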