0 votes
0 answers
81 views

I have a simple question for people used to Perplexity; this is what appears in the Perplexity documentation (https://docs.perplexity.ai/guides/user-location-filter-guide#examples): `import requests url ...`
Udl David
0 votes
0 answers
72 views

I'm working with LDA on a Portuguese news corpus (~800k documents with an average of 28 words each after cleaning the data), and I’m trying to evaluate topic quality using perplexity. When I compute ...
O Basile
1 vote
1 answer
401 views

I can't find an efficient way to check whether a Perplexity API key is valid, in Python or otherwise. For OpenAI I do: `def check_openai_api_key(api_key): openai.api_key = api_key ...`
Digicem
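A minimal sketch of such a validity check for the Perplexity API, assuming the chat-completions endpoint and a model name from the public docs (both are assumptions and may change); the idea is that a 401 response means the key was rejected:

```python
import requests


def check_perplexity_api_key(api_key: str) -> bool:
    """Return True if the Perplexity API accepts the key.

    Sends one minimal chat request; HTTP 401 means the key is invalid.
    Endpoint and model name are assumptions taken from the public docs.
    """
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "sonar",  # hypothetical/current model name, may change
            "messages": [{"role": "user", "content": "ping"}],
        },
        timeout=10,
    )
    return resp.status_code != 401
```

This mirrors the OpenAI-style check in the question: rather than a dedicated validation endpoint, you make the cheapest possible authenticated call and inspect the status code.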
0 votes
1 answer
96 views

In the paper here, it says that perplexity as an automated metric is not reliable for open-domain text generation tasks; instead it uses lm-score, a model-based metric, to produce perplexity-like ...
Sahil Yerawar
2 votes
2 answers
2k views

I can get the perplexity of a whole sentence from here: `from transformers import GPT2LMHeadModel, GPT2TokenizerFast; device = "cuda"; model_id = "gpt2"` ...
Penguin
1 vote
0 answers
319 views

I am currently working on a project calculating the perplexities of various causal LLMs for different languages, to estimate their behaviour when the input is in a language that ...
Nikita Volkov
1 vote
1 answer
557 views

Challenges when calculating perplexity: is my approach reasonable? I am trying to find a pre-trained language model that will work best for my text. The text is pretty specific in its language and ...
Agnes
1 vote
1 answer
848 views

I'm following Huggingface doc on calculating the perplexity of fixed-length models. I'm trying to verify that the formula works for various strings and I'm getting odd behavior. In particular, they ...
Penguin
0 votes
1 answer
134 views

I am new to Mallet. I would like to get perplexity scores for 10-100 topics in my LDA model, so I ran the held-out probability; it gives me the value of -8926490.73103205 for topic=100, which ...
May3514
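If the held-out figure is a total log-likelihood in natural log (which is how Mallet's held-out evaluator reports it, to my understanding), a large negative number is expected; per-word perplexity is exp(-LL / N), where N is the number of held-out tokens. A small conversion sketch (the token count below is made up; the -8926490.73 value from the question would need its own N):

```python
import math


def per_word_perplexity(held_out_loglik: float, num_tokens: int) -> float:
    """Convert a total held-out log-likelihood (natural log) into
    per-word perplexity: exp(-LL / N)."""
    return math.exp(-held_out_loglik / num_tokens)


# Made-up example: suppose the held-out set contained 1,000,000 tokens.
print(per_word_perplexity(-8926490.73103205, 1_000_000))
```

Note that when comparing topic counts, the raw log-likelihoods are only comparable if they are computed over the same held-out set; dividing by the token count makes the numbers interpretable as perplexities.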
2 votes
1 answer
1k views

Is there a way to calculate the perplexity of BERTopic? I am unable to find any such thing in the BERTopic library and in other places.
Inaam Ilahi
1 vote
1 answer
644 views

I am currently using GPT-3 and I am trying to compare its capabilities to related language models for my master's thesis. Unfortunately GPT-3 is an API-based application, so I am not really able to ...
Fabian
3 votes
0 answers
239 views

I'm using the seededLDA package to do an LDA topic model. However, all of the packages and functions I've found to compute perplexity, log-likelihood, exclusivity, etc. (and other diagnostic tools) don't ...
Daniel Casey
0 votes
2 answers
3k views

Given the formula for calculating the perplexity of a bigram model (with add-1 smoothed probabilities), how does one proceed when one of the per-word probabilities in the sentence ...
axelmukwena
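The point of add-1 (Laplace) smoothing is exactly that no bigram probability is ever zero, so the perplexity product never collapses, even for unseen pairs. A minimal sketch on a made-up toy corpus (names and data are illustrative, not from the question):

```python
import math
from collections import Counter


def train_bigram_add1(tokens):
    """Add-1 smoothed bigram model: P(w2|w1) = (c(w1,w2)+1) / (c(w1)+V)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)

    def prob(w1, w2):
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab_size)

    return prob


def bigram_perplexity(sentence, prob):
    """exp of the average negative log P over the sentence's bigrams."""
    pairs = list(zip(sentence, sentence[1:]))
    nll = sum(-math.log(prob(w1, w2)) for w1, w2 in pairs)
    return math.exp(nll / len(pairs))


corpus = "the cat sat on the mat".split()
prob = train_bigram_add1(corpus)

# "mat cat" never occurs in the corpus, yet smoothing keeps it nonzero:
print(prob("mat", "cat"))  # (0+1)/(1+5) = 1/6
print(bigram_perplexity("the cat sat".split(), prob))
```

With unsmoothed counts, `prob("mat", "cat")` would be 0 and the log in the perplexity formula would be undefined; smoothing is what lets the computation proceed.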
2 votes
0 answers
438 views

A few days ago I finished writing a word prediction program that tests both LSTM and GRU models on a given dataset. I test 4 models - 2 LSTM models and 2 GRU models. I wrote the program on Google ...
Guy
0 votes
0 answers
756 views

I am trying to calculate the perplexity score in Spyder for different numbers of topics in order to find the best model parameters with gensim. However, the perplexity score is not decreasing as it is ...
blackmamba
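One common pitfall here: to my understanding of gensim's docs, `LdaModel.log_perplexity` returns a per-word likelihood *bound*, not the perplexity itself; perplexity is 2**(-bound), so the raw numbers move in the opposite direction from what you might expect. A small conversion sketch (the bound values below are made up):

```python
def bound_to_perplexity(per_word_bound: float) -> float:
    """Convert gensim's per-word likelihood bound (as returned by
    LdaModel.log_perplexity, per its docs) into perplexity: 2**(-bound)."""
    return 2.0 ** (-per_word_bound)


# Made-up bounds from two candidate topic counts:
print(bound_to_perplexity(-7.5))  # ≈ 181.02
print(bound_to_perplexity(-8.1))  # more negative bound → higher perplexity
```

So when sweeping topic counts, a bound moving toward 0 means perplexity is decreasing; plotting the raw `log_perplexity` output without this conversion makes the trend look inverted.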
