
Talk:Prime number theorem


Miscellaneous


Not sure how to change that myself but the mathematical notation here is wrong: http://en.wikipedia.org/math/a2184dbe1f8958ef9bf635237ddf9573.png It should not be the approx symbol (2 waves) but a symbol with 1 wave. Strictly speaking the approx symbol is wrong, since Pi(x) is _not_ approximated by x/ln(x); only the limit of Pi(x)*ln(x)/x is 1, which is a different statement. See also: http://mathworld.wolfram.com/PrimeNumberTheorem.html


Could someone put a proof of the Prime Number Theorem here?

Paul Erdos, the legendary genius, was the first to provide an 'elementary' proof of the prime number theorem. Should this be added?

I don't think so. Any non-elementary proof requires considerable background and machinery from complex analysis, and the proof runs several pages. The "elementary" proof is even more difficult to follow, requires lots of preliminary estimates and results, and the proof is several pages long.
^that doesn't answer the question. Over the past few weeks I have attempted to clarify a detail in the section entitled 'Elementary proof' because it seemed to dismiss Erdos's contribution. Yet my edits have been reverted by two others in the meantime, even though the corresponding information on Erdos's wiki page properly cites the collaboration between Erdos and Selberg. I tend to believe that Erdos deserves more credit than he gets, mostly because his account of the dispute seems less petty. Fhue (talk) 04:43, 5 June 2009 (UTC)[reply]

"The constant involved in the O-notation is unknown." Which constant?

I assume this talks about the inherent constant factor that comes with O-notation. The line about Erdös (not Erdos!) should probably be added, but without the "the legendary genius" part. -- Schnee 12:22, 5 Sep 2003 (UTC)

I'd like to see added: methods for calculating pi(x), other than brute force. Bubba73 05:25, 18 Jun 2005 (UTC)


"The Selberg-Erdős work effectively put paid to the whole concept" - is "paid" a wrong word? Bubba73 05:28, 18 Jun 2005 (UTC)

^should be "...put to rest..."

Correct TeX


Do not write log\ x. That renders "log" in italics, as if l, o and g were variables multiplied together.

Instead write \log x. That renders an upright "log" with proper operator spacing.

Use of backslashes in \log, \exp, \sin, \cos, \max, \min, \det, \lim, \sup, \inf and a number of other math operators has at least three effects:

  • log and the like don't get italicized as if they were variables;
  • proper spacing is automatic. How much space is needed depends on the context, but all of the standard conventions in this regard are built in to the TeX software;
  • in some cases, e.g., \lim, \max, etc., the subscript will appear directly below the operator, rather than below and to the right (see the example below).
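A minimal side-by-side sketch of the difference (plain LaTeX; on-wiki the same commands go inside <math> tags, without the dollar signs):

    % italicized letters with no operator spacing -- looks like the product l*o*g*x
    $log\ x$
    % upright operator with correct spacing
    $\log x$
    % \lim, \max, etc. put the subscript underneath in displayed formulas
    $$\lim_{x\to\infty}\frac{\pi(x)\ln x}{x}=1$$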

Michael Hardy 23:02, 21 Jun 2005 (UTC)

Thanks, I'm the one that used log\. (I started with TeX less than a week ago.) Bubba73 23:51, 21 Jun 2005 (UTC)

Logarithmic integral


There are some subtle problems with the logarithmic integral in this article: first, the asymptotic expansion is for the regular logarithmic integral function and not for the offset logarithmic integral. To be consistent with the notation in other articles in WP, and with Abramowitz & Stegun, we need to change Li to li throughout this article. Sooo... instead of being bold, I thought I'd ask here ... who put Li in there? is there a well-known number theory book using this notation? linas 21:24, 21 December 2005 (UTC)[reply]

p.s. in case there's confusion about the asymptotic expansion, one has li(x)=Ei(ln x), with Ei the exponential integral, and a detailed derivation of the asymptotic expansion for li(x) is given in the article on asymptotic expansions. However, both this article and the Riemann hypothesis article are using Li(x). linas 21:31, 21 December 2005 (UTC)[reply]
Linas, I don't understand you. Why do you want to change Li to li? I think li = logarithmic integral = int_0^x dt/ln t and Li = offset logarithmic integral = int_2^x dt/ln t; the article uses Li because it is about the offset logarithmic integral. I'm writing from my parents so I can't really check anything. Then, which asymptotic expansion are you talking about? The expansions for the regular logarithmic integral and the offset logarithmic integral are the same as they differ only by a constant term, which is smaller than the terms in the expansion. Am I making any sense? -- Jitse Niesen (talk) 21:51, 21 December 2005 (UTC)[reply]
Yes, well Li(x) ≈ li(x)-1.05... so by using big-O notation, one can drop this constant, and at this level, the asymptotic expansions are "the same". But if you work the details, the quoted asymp expansion is really for li(x) not Li(x), and if taken literally, will give you accurate estimates that are off by 1.05... however, maybe this isn't important. linas 22:59, 21 December 2005 (UTC)[reply]
"the quoted asymp expansion is really for li(x) not Li(x)" — you seem to be using some intuition here that I do not have.
No, not at all; I merely went through the actual derivation of the asymp expansion the other day (for an unrelated reason), and in the derivation, it's li not Li that comes out. (It's half a page of paper total; the bulk of the work is already done in the example in asymptotic expansion).
"accurate estimates that are off by 1.05..." — did you check this? Where do you truncate the expansion?
I'm merely noting that li(2)=1.05... no I did not check the asymp expansion numerically. The "rule of thumb" for truncating asymp. expansions to get the best accuracy is to cut them off at the smallest term. But you are right, after such a cut, the result could be off by 10 or 100 so arguing about 1.05 is silly, so my bad. Although, I'm now motivated to actually plug in a few numbers ...
If you want, you can move the expansion a bit up and say it is the expansion for li. The whole article could use a check; for instance, it now seems that the table uses li(x) instead of Li(x). -- Jitse Niesen (talk) 14:29, 22 December 2005 (UTC)[reply]
Maybe. If I do anything here, it'll probably be minimalist. I'm trying to work on something else. linas 16:21, 22 December 2005 (UTC)[reply]


that asymptotic expansion is the indefinite integral without the integration constant. It is clear that the calculation at the lower limit 2 has an explosive divergence. The value is 0 if the lower limit is 0, but the error terms of the form int(0, x>1) dt/ln(t)^(2n) are divergent, and so using this formula for li is suspect. About the confusion between li and Li: the sum rule sum(m) mu(m)/m means that in the definition of the Riemann function R(x) [that appears just below] these two functions work equally well. Pietro — Preceding unsigned comment added by 2A00:1620:C0:64:21C:61FF:FE03:A4C (talk) 10:52, 20 December 2012 (UTC)[reply]

please excuse brain dump


Please someone make sense of the following brain dump and work it into the article.

I haven't studied this stuff for years now, but from what I recall, Riemann's prime counting function J(x) (see the article on prime counting function) is in some sense a much more natural candidate than π(x) to be approximated by Li(x). I can't remember exactly why; I think it has to do with the fact that J(x) is closely connected with ψ(x), which in turn is the summatory function of the von Mangoldt function, which has very nice convolution properties.

J(x) can be defined (with caveats -- see prime counting function) by J(x) = Σ_{n ≥ 1} (1/n) π(x^(1/n)).

Then by Möbius inversion you get π(x) = Σ_{n ≥ 1} (μ(n)/n) J(x^(1/n)).

(There are only finitely many terms in this sum.) So if J(x) is approximated well by Li(x) then you would expect π(x) to be approximated well by R(x) = Σ_{n ≥ 1} (μ(n)/n) Li(x^(1/n)),

whose leading term is just Li(x). There are issues of convergence to worry about here; I haven't thought through it very hard. Also I'm not sure whether you should be using Li or li. Also I'm not sure if R(x) is standard notation for this. And I don't have any references.

Anyway, if you shove this through maple or something, you get the following values of R(x) for x being successive powers of ten, truncated using the "floor" function:

4, 25, 168, 1226, 9587, 78527, 664667, 5761551, 50847455, 455050683, 4118052494, 37607910542, 346065531065, 3204941731601, 29844570495886, 279238341360977, 2623557157055977, 24739954284239494, 234057667300228940, 2220819602556027015, 21127269485932299723, 201467286689188773625, 1925320391607837268776...

I'm not an expert at numerical evaluation, so I wouldn't be surprised if these were off by one somewhere. I tried with varying number of terms and lots of digits of accuracy to check that the answers were stable.

If you compare this to the table in the article, you'll see it's much more impressive than the other estimates, especially for small x. (The difference even changes sign.) However, I believe I read somewhere that it has been proved that in the long run it's asymptotically no better than just using Li(x), I'm guessing because there are indeed plenty of zeroes of ζ(s) with real part 1/2, and the contributions from these are at least as big as the "correction" terms for n at least 2. Dmharvey 02:01, 25 December 2005 (UTC)[reply]
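A minimal sketch of that computation in Python, for anyone who wants to reproduce the numbers (assuming the mpmath library; it uses the Gram series form of R(x), which equals the Möbius sum over Li(x^(1/n)) above but is friendlier numerically):

    from mpmath import mp, mpf, log, zeta, factorial, floor

    mp.dps = 60  # generous working precision

    def riemann_R(x, tol=mpf('1e-30')):
        """Riemann's R(x) via the Gram series
        R(x) = 1 + sum_{k>=1} (ln x)^k / (k * k! * zeta(k+1))."""
        L = log(mpf(x))
        total, k = mpf(1), 1
        while True:
            term = L**k / (k * factorial(k) * zeta(k + 1))
            total += term
            if k > L and term < tol:  # terms only start shrinking once k > ln x
                return total
            k += 1

    # should reproduce the list above (floor of R(10^e)),
    # give or take an off-by-one right at a boundary
    for e in range(1, 13):
        print(e, int(floor(riemann_R(10**e))))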

I read a paper that mentioned the goodness of fit for R vs Li. The error term for the approximation of pi by Li is larger than the main correction term (1/2)Li(√x), so there can be no improvement in that asymptotic sense. I think it's been proved that |pi(x) − R(x)| < |pi(x) − Li(x)| for most x, though, so at least in that sense it's better.
You know, we should probably have a page for Riemann's R function as a place for these sorts of notes, rather than cramming them into other articles. There ends up being more than one realizes to say about the function!
And for what it's worth, my calculations agree with yours on the values of Riemann's R.
CRGreathouse (t | c) 12:48, 4 October 2007 (UTC)[reply]


about the long run, you should have read something like "sometimes Li is better than R"; this is a natural consequence of the fact that the difference pi−Li (pi being the exact count) changes sign infinitely many times; around these changes Li is clearly better than R. pietro — Preceding unsigned comment added by 2A00:1620:C0:64:21C:61FF:FE03:A4C (talk) 12:02, 20 December 2012 (UTC)[reply]

Elementary Stuff


This article is fairly complex, as it should be given the nature of the topic. I've been thinking about this problem on a very basic level.

According to Gauss, every number can be expressed as the product of primes. For example, 14 is (2x7) both of which are primes.

Therefore prime numbers are the building blocks for other numbers.

An easy way to understand why prime numbers decrease in number as the number grows follows.

Take a few prime numbers, say 2, 3, 5, and 7. With just these numbers you can create the numbers 2x3=6 .. 2x5=10 .. 2x2x3=12 .. 2x7=14 .. etc

It becomes obvious that a new prime number will occur whenever a combination of old primes is not possible leaving a gap in the set of all numbers.

If we want to know how many primes there are, we might think in reverse. How many combinations of old primes are there? As the number of these combinations increases, they fill up more and more of the number set. This explains why the number of primes is asymptotic as x grows larger.

Going back to the original question: how many prime numbers are there?

Once again look at a few simple primes say, 2,3 and 5.

The number 2 can be used over and over again to create new numbers.

2+2 = 4 .. 2+2+2 = 6 .. 2+2+2+2 = 8

(2x2 = 4) (2x3=6) (2x4=8)

All of these numbers can be expressed using factors and multiplication. They are all non-prime numbers because they are made up of more than a prime and one.

By looking at the number 2 we can see that at least 1/2 of all numbers are non prime.

Then if we examine 3.. we can see that it's responsible for 1/3 of all numbers being non-prime.

However, we can't simply add 1/2 and 1/3 to increase our total of non-prime numbers, because some of them overlap (for example 2x3=6 which is 2+2+2 or 3+3).

This sort of examination is similar to the sieve method. - BringCocaColaBack
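For what it's worth, the "overlap" bookkeeping described above is exactly what the sieve of Eratosthenes does; a tiny Python sketch (the limit 50 is arbitrary):

    def primes_up_to(limit):
        """Sieve of Eratosthenes: cross off every multiple of each prime."""
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(limit ** 0.5) + 1):
            if is_prime[p]:
                # multiples of p are "built from" smaller primes, hence composite
                for multiple in range(p * p, limit + 1, p):
                    is_prime[multiple] = False
        return [n for n, flag in enumerate(is_prime) if flag]

    print(primes_up_to(50))
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]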

I'm not sure what you are trying to accomplish here. Euclid already knew of (some very close to) the fundamental theorem of arithmetic and the infinitude of primes. Gauss certainly wasn't the first one to prove that.Dragonfall 04:25, 3 March 2007 (UTC)[reply]
BringCocaColaBack: where you end up with your halves and thirds is at 1/2 + 1/3(1-1/2) + 1/5(1-(1/2 + 1/3(1-1/2))) + (a term roughly twice the size of the previous term for every new prime) = 1. That could be put more simply as SIGMA(x) = 1, where x_n = 1/p_n(1-x_(n-1)), with x_n and x_(n-1) the nth and (n-1)th terms of this, p_n the nth prime number and x_0 = 0. I don't see, though, how to use that. Divisibility patterns including only a number of primes are periodic and the number of missing numbers (candidates for primacy) is exactly calculable for that period, but that period is the size of the nth primorial for all primes up to the nth prime, i.e. a number much larger than the range in which only these primes have to be considered as candidates for divisibility. Livedevilslivedevil

Graph


I noticed the graph on the Prime Number Theorem page is wrong, and I don't know who to contact so I am posting this here: the blue line should be Li(x) and the red one pi(x), and not the other way around, as for small values of x, Li(x)>pi(x). Thanks

I'm not sure, but it might have something to do with some disagreement that's been going on about the correct definition of Li(x), and whether it's really the definition of li(x), and which one should be used in the article. (They differ by some small constant.) Dmharvey 19:50, 15 April 2006 (UTC)[reply]

Yes, they differ by 1.045 but that is negligible. I think the graph is just wrongly marked...

Am I missing something? The graph is not "marked" ! There is no indication whatsoever of which color line represents which function. Is there some common knowledge that everyone but me holds which would allow them to know at a glance without a label which curve is which?
Also the "thumbnail" version is impossible to see. When I clicked on it to get the larger version I lost all information about it. Could someone edit the information on the large image to say what it is?
Nwbeeson 21:16, 30 January 2007 (UTC)[reply]
When I click on the graph, I get a large image with colors explained below. The small image seems a little useless. Maybe it should say "Click for large image and explanation" or something like that. PrimeHunter 00:18, 31 January 2007 (UTC)[reply]

Dusart


The 1999 Dusart paper gives stronger asymptotic bounds than that given.

Article:

On p. 414 §4:

The article's result is correct, but the latter result is stronger for k > 13197. I'm choosing to leave this off because the extra complexity isn't needed here, but if someone feels it's worthwhile they can add it.

At the moment I'm trying to read his thesis Autour de la fonction qui compte le nombre de nombres premiers to see if there are any other results along these lines. My French isn't very good, but I'll see what I can find. Pages 30–32 cover it, so I just have a bit to wade through. CRGreathouse 11:50, 25 May 2006 (UTC)[reply]

Another interesting improvement, given in arxiv:1002.0442, is the lower bound π(x) ≥ (x / ln x)(1 + 1 / ln x + 2 / ln² x). — MFH:Talk 00:43, 23 March 2023 (UTC)[reply]
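A quick sanity check of that lower bound in Python, assuming the standard published value π(10^9) = 50,847,534 (the paper states the bound for sufficiently large x, so very small x need not satisfy it):

    from math import log

    def lower_bound(x):
        """pi(x) >= (x/ln x)(1 + 1/ln x + 2/ln^2 x), per arXiv:1002.0442."""
        L = log(x)
        return (x / L) * (1 + 1 / L + 2 / L**2)

    pi_1e9 = 50_847_534                  # known value of pi(10^9)
    print(lower_bound(10**9))            # roughly 5.08e7
    print(lower_bound(10**9) <= pi_1e9)  # True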

Table of π(x), x / ln x, and Li(x)


The first sentence in the section is, "Here is a table that shows how the three functions π(x), x / ln x and Li(x) compare:" but the table does not have any column labeled "x/ln(x)" nor "Li(x)".

Am I expected to solve the equations at the top of the columns to determine which column is "x/ln(x)" and which is "Li(x)" ?!?

OR is there a mistake in that sentence?

Nwbeeson 21:20, 30 January 2007 (UTC)[reply]

x/ln(x) and Li(x) are not shown. The interesting thing is their difference to pi(x) so only that is shown. I modified the explanation. PrimeHunter 00:18, 31 January 2007 (UTC)[reply]
Sorry for resurrecting an old discussion. While x/ln(x) is certainly significant, wouldn't "x/(ln(x)-1)" be more so, since it is significantly closer to π(x) than x/ln(x) is? Glenn L (talk) 01:16, 24 July 2013 (UTC)[reply]
x/ln(x) is by far the most famous approximation, it's the "simplest" approximation which is asymptotically correct, and it's used in the prime number theorem, the subject of this article, so replacing it would be odd. x/(ln(x)-1) is closer than x/ln(x) for the small numbers listed here but far from as close as li(x) which is also listed. Note: 1/ln(x) is better than 1/(ln(x)-1) for approximating the chance that a random x is prime. But when counting all primes below x for a relatively small value like 10^24, x/ln(x) suffers from many numbers being so much below x that the primality chance drops significantly. x/(ln(x)-1) is a way to compensate for this drop without needing the more "advanced" function li(x). PrimeHunter (talk) 02:09, 24 July 2013 (UTC)[reply]
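For a concrete feel of that comparison, a small check at x = 10^6 (assuming sympy for the exact count and mpmath for li):

    from math import log
    from sympy import primepi
    from mpmath import li

    x = 10**6
    print(int(primepi(x)))   # 78498     -- exact pi(x)
    print(x / log(x))        # ~72382    -- x / ln x
    print(x / (log(x) - 1))  # ~78030    -- x / (ln x - 1), noticeably closer
    print(li(x))             # ~78627.5  -- li(x), closest of the three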

This table is out of date. A more up to date version is at Prime-counting_function#Table_of_π(x),_x_/_ln_x,_and_li(x) MathPerson (talk) 21:19, 1 April 2019 (UTC)[reply]

Relative error language


I just restored the relative error formulation of the theorem, since that seems to me to be the most intuitively accessible. Clearly, lim f(x)/g(x) = 1 is equivalent to lim |g(x) - f(x)| / |f(x)| = 0, and the latter is the relative error. AxelBoldt 16:38, 4 April 2007 (UTC)[reply]

First paragraph and first subsection are too different


According to the introductory paragraph:

1. The PNT states that the frequency of primes near x approximates 1/ln(x).

According to the "statement of the theorem" subsection:

2. The PNT states that the total number of primes less than x approximates x/ln(x).

A solid high school math student should be able to prove that 2 is equivalent to:

2A. The PNT states that the frequency of primes between 0 and x approximates 1/ln(x).

But proving that 2A and 1 are equivalent is certainly beyond a lot of college math students. Moreover, it requires facts about logarithms that are irrelevant to this page. (By way of comparison, there are functions whose average value between 0 and x tends to zero but whose value near x does not approach any limit. In other words, frequency near x and frequency between 0 and x are not necessarily equal, although they are equal in this case.)

So why the discrepancy? Yes, I know that some of the editors prefer to state the theorem as 1 and others prefer to state it as 2. But defining the theorem one way, and then suddenly redefining it another way, without saying why they are equivalent, would be inappropriate even in a math textbook -- not to mention on Wikipedia, which presumably has an audience beyond math graduate students.

(To be honest, I suspect that very few high school students could even prove that 2 and 2A are equivalent, because they simply are unable to write a proof, period.) — Lawrence King (talk) 07:51, 23 November 2007 (UTC)[reply]
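For the record, the easy equivalence (2 ⟺ 2A) is just division by x; in the notation used above,

    \frac{\pi(x)}{x}\Big/\frac{1}{\ln x}
      \;=\; \frac{\pi(x)}{x/\ln x}\;\longrightarrow\;1
      \quad\Longleftrightarrow\quad
      \pi(x)\sim\frac{x}{\ln x},

so "the density of primes in [0, x] is about 1/ln x" and "π(x) ~ x/ln x" say the same thing; it is the passage from density near x (statement 1) to average density on [0, x] (statement 2A) that needs an actual argument.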

Well, since Li(x) is a better approximation in general than x/log(x) to the number of primes up to x, I suppose #1 is the 'best' statement of the three. I would prefer to include at least 1 and (2 OR 2a), with OR in its usual inclusive sense, but with some mention that the two (three) imply each other. CRGreathouse (t | c) 07:57, 23 November 2007 (UTC)[reply]
That sounds great to me. I think it is perfectly fine to have multiple equivalent statements of a theorem, as long as the Wikipedia page states the fact that these are equivalent. We don't even need to explain why they are equivalent, as long as we state it. Without that statement, however, it's as if the Mark Twain page had a subsection describing the life of Samuel Clemens, without ever mentioning the connection between the two. — Lawrence King (talk) 22:04, 23 November 2007 (UTC)[reply]

divergent series?


Okay, maybe I'm just being stupid, but the article gives this series: Li(x) ~ (x/ln x) Σ_{k ≥ 0} k!/(ln x)^k.

Isn't this clearly divergent for all real, positive x? All its terms are positive, and the size of the terms increases for k>ln x.--76.93.42.50 (talk) 23:22, 1 March 2008 (UTC)[reply]

You should read this as: Li(x) / (x / ln x) = 1 + 1/ln x + 2/ln² x + ... = 1 + o(1 / ln x), wherever you stop the series. — MFH:Talk 00:49, 23 March 2023 (UTC)[reply]

Ah, okay, the Logarithmic integral function says it's divergent, but claims that it's still useful!? Clear as mud to me!--76.93.42.50 (talk) 23:25, 1 March 2008 (UTC)[reply]

Just like the power series for the gamma function, the series for the logarithmic integral diverges. But any finite truncation gives a good estimate of the value, with more terms giving a better estimate. But for a given value, the series can give only so much precision -- too many terms will give unusual values. CRGreathouse (t | c) 00:10, 2 March 2008 (UTC)[reply]
You say: "with more terms giving a better estimate." This is true, up to a point. I have found (by experiment) that the value of the integral is best approximated if you take [a.log(x)] as the maximum number of terms in the series (where a is near to 1). If you take more, the approximation starts to be worse, and a little later the error (and the series) suddenly shoots up. I think a much better approximation of the Li(x) function would be got by replacing the infinite sign at the top of the series by [a.log(x)]. Alfonsec (talk) 12:30, 4 December 2015 (UTC)[reply]
that series is the expansion of the indefinite integral and has an explosive divergence at the lower integration limit. Its correct use is the estimation of the number of primes in the interval (min, max) with a large min (albeit "large" depends on the context).
it is interesting to note that using li one correctly predicts that 2 is a prime and fits pleasantly the lowest primes. this is irrelevant in view of the asymptotic aims, but is ... pleasant? pietro188.14.97.4 (talk) 16:50, 22 July 2016 (UTC)[reply]

Errata?


Comparing the table in this article vs. that for Prime-counting function, there are differences; I do not know which is correct. For example, at 10^23, one lists 1,925,320,391,606,818,006,727 versus 1,925,320,391,606,803,968,923 in the other ...--Billymac00 (talk) 03:38, 28 May 2008 (UTC)[reply]

Good catch, I fixed it. 1,925,320,391,606,803,968,923 is correct (well, ±1 anyway). CRGreathouse (t | c) 13:17, 28 May 2008 (UTC)[reply]
Wow, that went back to the beginning of the table in 2005: [1] CRGreathouse (t | c) 13:32, 28 May 2008 (UTC)[reply]
The problem is disagreeing computations. [2] made two computations in or before 2001 and got 1925320391606818006727 (the old number in this article) in the first, and 1925320391606819893167-1886441 (1 less) in the second. They apparently never found the cause of the difference and didn't publish an "official" count. Silva's tables at [3] reported 1925320391606803968923 (14037803 less) around 6 years later. As far as I know there has been no independent verification but Silva's count is now accepted by major sites like MathWorld, Prime Pages and OEIS (some of which listed the old count earlier), so it makes sense for us to also accept it. PrimeHunter (talk) 21:57, 28 May 2008 (UTC)[reply]
Yes, I really should have explained that here. I came to the same conclusion. Actually, that might be worth mentioning in the article itself. CRGreathouse (t | c) 14:15, 29 May 2008 (UTC)[reply]

Under the "Statement of the theorem", after saying pi(x)\sim x/ln(x), it says that the behaviour of the difference is very complicated and is related to the Riemann hypothesis. But surely if we think of pi(x)-x/ln(x)=(pi(x)-li(x))+(li(x)-x/ln(x)), then the second term there is the main one and has the asymptotic series (given later in the article) starting with x/(ln(x))^2. The Riemann hypothesis is what affects the first term, pi(x)-li(x), but that term is asymptotically negligible compared to the second. Fathead99 (talk) 17:11, 23 January 2009 (UTC)[reply]

Beautifully written article


Kudos to whoever the main writer was on this article. As an interested non-mathematician, I found it refreshingly clear and well-written. —Preceding unsigned comment added by 128.125.52.176 (talk) 22:33, 17 March 2009 (UTC)[reply]

At the moment, 2019, it has been written by 360 editors. I am not sure any one of them is the main editor. — Preceding unsigned comment added by 2A00:23C4:7CA2:A401:4D0F:322:B337:EC52 (talk) 16:14, 10 July 2019 (UTC)[reply]

Some Pedantry


The second paragraph is roughly speaking indeed! The PNT, as stated, is a limit (or asymptotic) result. As such, by elementary properties of limits, it carries no information at all about any finite part of the 'sequence' that leads to it. The PNT does not tell you ANYTHING about any specified value of x. Of course, we know from other sources (like the inequalities stated) that 1/ln(x) is a plausible estimate for the prime density, but that does not come directly from the classical form of the PNT.

Incidentally, if you know what it means to pick an integer at random please let me know! If I pick 10 integers at random what is their expected mean? Try infinity. (I'm assuming that you are thinking of a uniform distribution.) It's a form of words that we often use casually (as in: probability of two randomly chosen positive integers being coprime is 6/pi^2) but it doesn't make a lot of sense unless embedded in a limiting process. Jpulham (talk) 15:40, 7 November 2009 (UTC)[reply]
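For what it's worth, the usual way to make the "random integer" phrasing precise is natural density, i.e. everything is a limit over initial segments:

    \Pr\big[\text{a ``random'' } n \le N \text{ is prime}\big]
      \;=\; \frac{\pi(N)}{N} \;\sim\; \frac{1}{\ln N}
      \qquad (N\to\infty),

and the coprimality statement is likewise the limit, as N → ∞, of the proportion of pairs (m, n) with m, n ≤ N that are coprime, which tends to 6/π².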

Explicit formula


The explicit formulae via the sums over zeta zeroes, that are given in this article and in Explicit formula, differ by a term, so (at least) one of them is wrong. I'm afraid I don't know yet, which one. Burivykh (talk) 13:28, 15 November 2009 (UTC)[reply]

Checked. An error is here: see MathWorld. Correcting. Burivykh (talk) 15:57, 15 November 2009 (UTC)[reply]
Perhaps it's the term corresponding to the trivial zeroes, as they are exactly in the negative even numbers; MathWorld says "sum over non-trivial zeroes". So most probably it's OK both here and there... Burivykh (talk) 16:08, 15 November 2009 (UTC)[reply]
Right. — Emil J. 11:46, 16 November 2009 (UTC)[reply]
OK, thanks! Burivykh (talk) 19:31, 17 November 2009 (UTC)[reply]

Base integer used for prime number investigations


In the base 10 system practically all the prime numbers are of the category 6N+/-1. However, it is possible to combine those two categories of numbers by using a different numerical base system. And which base system is that? Why the base 2 system, of course! In the base 2 system, no number ending in 0 could be a prime number. So the question becomes which is the biggest base 2 number (ending in 1), that can't be divided by a smaller base 2 number ending in 1? Is there a simple formula or program for determining that?WFPM (talk) 22:15, 14 August 2010 (UTC)[reply]

This investigation procedure would reduce the ending number category of potential prime numbers from 4, Namely 1, 3, 7, and 9 ; (40% of the integers), to just the number 1; (50% of the integers) but the elimination procedure might be easier?WFPM (talk) 01:36, 15 August 2010 (UTC)[reply]

Seeing as how the calculators are using binomial (base 2) math in their calculations, maybe that's already the way they do it. What about that?WFPM (talk) 01:48, 15 August 2010 (UTC)[reply]

I think you may have misunderstood something. Whether a number is prime or not does not depend on the base that is used to represent it. There is no "biggest base 2 number ending in 1 that cannot be divided by a smaller base 2 number ending in 1"; if there were, it would be the largest odd number that cannot be divided by a smaller odd number and therefore it would be the largest prime number, but we know that there is no largest prime. Gandalf61 (talk) 07:44, 15 August 2010 (UTC)[reply]

Hmmmh! Interesting!! But except for 2 there cannot be any even prime numbers. And all the rest of the numbers are odd numbers, so any prime number (other than 2), has to be an odd number, and I was just trying to get them into only 1 category of ending number. And that happens in the base 2 numbering system. And now I'm stuck with the mental problem of how to get the square root of a binomial number! C'est la vie. And my Casio does it so easy...WFPM (talk) 14:08, 15 August 2010 (UTC) PS: And we don't know what the largest number is yet either. But we do know that the largest prime number will be an odd number.[reply]

The expression is "c'est la vie"; it's French ("that's life" would be a passable translation). There is no largest integer, and likewise no largest prime. It's easy to take square roots in binary -- at least as easy as taking them in decimal, at least. CRGreathouse (t | c) 16:38, 15 August 2010 (UTC)[reply]

Yes I'm sure, but I don't know what it is yet. And say the number line list of binomial numbers was marked out on the side of intervals on an infinite length railroad track. Then the ones ending in 1 would be the odd numbers (50%). Then with 1 iteration of a deletion process of the 1/6th of them that are divisible by 3 (binomial 11) I would have the possible prime numbers down to only 1/3rd of the total, which is that proportion of the 6N+/-1 numbers. Is there a faster way to delete non-prime numbers, or am I going about it wrong?WFPM (talk) 19:12, 15 August 2010 (UTC)[reply]

This isn't really the right place for this discussion -- maybe Wikipedia:Reference desk/Mathematics? Aside from that, I'm not sure where your questions are headed. Yes, sieving works just fine, regardless of which base you use. (Also, I assume you mean "binary" when you say "binomial"?) CRGreathouse (t | c) 21:25, 15 August 2010 (UTC)[reply]
Okay. But my method has the result of clearing up an increasing percentage of any of the odd numbers being a prime number up to the value of the square of the largest known prime number, which is quite a reach of values, and I thought it might be worthy of note.WFPM (talk) 22:20, 15 August 2010 (UTC)[reply]

Bounds for primes in arithmetic progressions?


For primes in arithmetic progressions, is there a more explicit statement than just the asymptotic one? I.e, can "reasonable" bounds be given such that

for all ?--Hagman (talk) 22:09, 8 May 2011 (UTC)[reply]
I don't know of any explicit bounds, but you can check
A. Page. On the number of primes in an arithmetic progression. Proc. London Math. Soc. (2) 39 (1935), 116-141.
and papers citing it for (weak) but technically effective bounds. Most of the improvements (Siegel 1935 etc.) are ineffective in principle, so not useful to you.
CRGreathouse (t | c) 03:10, 9 May 2011 (UTC)[reply]

Prime number equivalent statement


Could someone revise this statement? The prime number theorem is equivalent to the statement that the nth prime number pn is approximately equal to n ln(n), again with the relative error of this approximation approaching 0 as n approaches infinity. I expect it should be n/ln(n).--Almuhammedi (talk) 13:32, 23 February 2013 (UTC)[reply]

The statement is correct. Note that it is about the nth prime number and not about the number of primes below n. The nth prime number is clearly larger than n. For example, the 1,000,000th prime number is 15,485,863, while the number of primes below 1,000,000 is 78,498. PrimeHunter (talk) 13:56, 23 February 2013 (UTC)[reply]
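A quick numerical illustration of the distinction, using the same example (assuming sympy):

    from math import log
    from sympy import prime, primepi

    n = 10**6
    print(prime(n))          # 15485863  -- the n-th prime
    print(round(n * log(n))) # 13815511  -- n ln n, the PNT estimate for the n-th prime
    print(int(primepi(n)))   # 78498     -- number of primes below n (a different quantity)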

Ending figure (something different)


Are there any theories of what ending figure is most common among primes - 1, 3, 7 or 9 ? Or is it proven that there are equal amounts of these ending figures, when moving towards infinity ? Boeing720 (talk) 21:35, 11 September 2013 (UTC)[reply]

Each of them occurs in asymptotically 1/4 of all primes, but due to Chebyshev bias, 1 and 9 should be slightly less common than 3 and 7.—Emil J. 12:27, 12 September 2013 (UTC)[reply]

|Pi(x)-Li(x)|


"The constant involved in the big O notation was estimated in 1976 by Lowell Schoenfeld:[11] assuming the Riemann hypothesis,

for all x ≥ 2657."

For the better estimate we would write:


!? JLove, 09-20-2013.

200 quadrillionth prime number?


At Approximations_for_the_nth_prime_number there is a statement of the 200 quadrillionth prime number. My question is short scale or long scale? Because of this confusion I suggest the use of scientific notation. John W. Nicholson (talk) 22:29, 20 November 2013 (UTC)[reply]

It says "Again considering the 200 quadrillionth prime number 8512677386048191063". A long scale quadrillion is 1024 and 8512677386048191063 only has 19 digits so it's clearly short scale. Wikipedia:Manual of Style/Dates and numbers#Large numbers says billion and trillion are understood to be short scale, but doesn't mention even larger numbers. "Again" in the quote refers to the earlier mention of 200 quadrillion in Prime number theorem#Statement of the theorem where it says 200 · 1015. I think we are OK but if quadrillion isn't used the second time then it shouldn't be used the first either. It would be odd to first refer to a number by name and later by scientific notation when the same example is used. PrimeHunter (talk) 23:14, 20 November 2013 (UTC)[reply]

Estimate for n-th prime


The PNT implies the estimate pn ~ n ln n for the n-th prime. It is proposed to replace this with a more complicated expression: is there a reason for this more complicated version? Is it supported by reliable sources? Deltahedron (talk) 20:16, 31 July 2014 (UTC)[reply]

Well, by the prime number theorem we have the more direct version . It is "supported by common sense", and so I have no doubt thousands of "reliable sources" must exist. Of course one will prefer the standard version , if one is rather interested in the behavior of with respect to (which is often the case). Sapphorain (talk) 21:36, 31 July 2014 (UTC)[reply]

I realize the standard simple approximation has been the tradition ...we may leave it as is.

easily leads to a better approximation:

The article does mention even a better approximation along the same lines. Whyes19 (talk) 23:00, 2 August 2014 (UTC)[reply]

Do you have a reference for that? I'm not disputing it, but if you want to insert it into the article it must, as always, be verified by citing a reliable source. Deltahedron (talk) 06:40, 3 August 2014 (UTC)[reply]

I quote from the same wiki article on Prime Number Theorem we are discussing:

Approximations for the nth prime number


As a consequence of the prime number theorem, one gets an asymptotic expression for the nth prime number, denoted by pn: pn ~ n ln n.

A better approximation is

pn/n = ln n + ln ln n − 1 + (ln ln n − 2)/ln n − ((ln ln n)² − 6 ln ln n + 11)/(2 ln² n) + o(1/ln² n).

[1]

Again considering the 200 · 10^15-th prime number 8512677386048191063, this gives an estimate of 8512681315554715386; the first 5 digits match and the relative error is about 0.00005%.

Rosser's theorem states that pn is larger than n ln n. This can be improved by the following pair of bounds:[2][3] ln n + ln ln n − 1 < pn/n < ln n + ln ln n for n ≥ 6.

Whyes19 (talk) 00:53, 4 August 2014 (UTC)[reply]

But it is incorrect. The article itself says that the asymptotics are derived empirically. But it is not possible to empirically derive a o(1/ln² n) term. Look at how these asymptotics are represented here: it means that the number 11 which appears in the numerator of the last fraction can be replaced neither by 11+ε nor by 11−ε for any ε > 0, since then the difference ε/ln² n would be larger than o(1/ln² n). It makes no sense. Tommy Rene Jensen (talk) 21:29, 27 November 2023 (UTC)[reply]
Are you claiming that Rosser's theorem is incorrect? It's obviously compatible with the asymptotics given, and the asymptotics prove that the equality is eventually true in any case. 'All' Rosser needed to show was that it holds for n >= 6 rather than some unspecified N. - CRGreathouse (t | c) 04:11, 28 November 2023 (UTC)[reply]
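For anyone who wants to reproduce the comparison quoted above, a sketch in Python (assuming mpmath for precision; n = 2·10^17 and the reference prime are the values from the quoted text, and the expansion is the one reconstructed above):

    from mpmath import mp, mpf, log

    mp.dps = 40
    n = mpf(2) * 10**17                      # the 200 * 10^15-th prime is discussed above
    L, LL = log(n), log(log(n))

    # p_n ~ n (ln n + ln ln n - 1 + (ln ln n - 2)/ln n
    #          - ((ln ln n)^2 - 6 ln ln n + 11)/(2 ln^2 n))
    approx = n * (L + LL - 1 + (LL - 2)/L - (LL**2 - 6*LL + 11)/(2*L**2))

    exact = mpf(8512677386048191063)         # value quoted in the text above
    print(approx)                            # should be close to 8512681315554715386
    print(abs(approx - exact)/exact)         # relative error around 5e-7 (about 0.00005%)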

References

  1. ^ Ernest Cesàro (1894). "Sur une formule empirique de M. Pervouchine". Comptes rendus hebdomadaires des séances de l'Académie des sciences. 119: 848–849. (in French)
  2. ^ Eric Bach, Jeffrey Shallit (1996). Algorithmic Number Theory. Vol. 1. MIT Press. p. 233. ISBN 0-262-02405-5.
  3. ^ Pierre Dusart (1999). "The kth prime is greater than k(ln k + ln ln k−1) for k ≥ 2" (PDF). Mathematics of Computation. 68: 411–415.

Primes Distribution Image Is Wrong


I recently noticed that the image isn't quite right. I don't know if it was because of the method or software used to create it that didn't have much precision at drawing a single pixel, but it printed some extra dots (which represent primes) and also some others that are missing. It can easily be checked by looking at the upper left corner of the image where it begins. It shows as primes: 2, 3, 4, 6, 7, 9, 10, 12, 13, etc.

By that reason I feel that it would be better if that image is omitted or changed. I also offer an image that I made myself as a replacement for the actual image. However, I couldn't upload it to Wikipedia because I'm not autoconfirmed yet. The best way of uploading it that I thought of was doing it via MEGA. The link is below. I'm sorry for not having a better way to show the image. Also, if it is too wide I can rearrange it, so it fits any other width as desired. The width that I used was just because it is equal to the fifth primorial (2 to 11) so the primes line up better.

Primes_distribution_up_to_19_primorial.png

--Dv-id.061 (talk) 07:06, 19 January 2015 (UTC)[reply]

I proceeded to update the file in Wikimedia Commons. The only change that has to be done now is to delete the - 1 after each number in the description which says, "top-right corner is 11# (2310) - 1", and "bottom-right corner is 19# (9,699,690) - 1". — Preceding unsigned comment added by Dv-id.061 (talkcontribs) 08:19, 19 January 2015 (UTC)[reply]
@Dv-id.061: Your version rendered completely white at the resolution in the article. I have reverted commons:File:Primes - distribution - up to 19 primorial.png to the former version. The description says "only odd numbers are shown", so it would be impossible to display the even composites you list. The image is 1155 pixels wide because 2310/2 = 1155. PrimeHunter (talk) 03:06, 19 August 2015 (UTC)[reply]


There is nothing wrong with the "Distribution of primes up to 19#" image. It seemed odd to me too with those large white columns, then I looked to it and realized that the png shows the odd numbers only (maybe this should be specified in the subtitle somehow). Even numbers are not shown (therefore, the prime 2 is missing, but this is not relevant). — Preceding unsigned comment added by LaurV (talkcontribs) 06:08, 16 November 2018 (UTC)[reply]

Even if the image is correct in every pixel, it is almost entirely useless. You cannot use it to read off individual primes. It is also not clear from the image that the primes are thinning out as the numbers increase. I would recommend either deleting it entirely, or making the limit be small enough that you can glean useful or interesting information from the image. MathPerson (talk) 17:14, 1 April 2019 (UTC)[reply]

So, are there any objections to deleting this image? If there are objections, could someone explain what information can be gleaned from this image? MathPerson (talk) 14:10, 2 June 2019 (UTC)[reply]
Final request/final notice: I really hate to delete things from an encyclopedia. However, since April 1, there have been no objections to deleting this image, and nobody has indicated that it has any value. So, I'll delete it on July 6. MathPerson (talk) 22:50, 4 July 2019 (UTC)[reply]
Done MathPerson (talk) 00:17, 9 July 2019 (UTC)[reply]

Intuition


Hi - this line (from the beginning) isn't clear: "It formalizes the intuitive idea that primes become less common as they become larger." (My emphasis for intuitive). There is really nothing intuitive about the idea that there should be fewer primes as the numbers get larger.

Gene Klein — Preceding unsigned comment added by 209.65.177.129 (talk)

It can vary what people consider intuitive but I think the large majority of people who understand the concept of prime numbers would expect less primes among larger numbers when there are more potential factors below the number. It's common to speak of intuitive in such contexts. PrimeHunter (talk) 02:03, 24 February 2015 (UTC)[reply]

Log vs. Ln


-- I think the article should be edited to contain ln(x) instead of log(x). ln(x) is more intuitive and is used in the graphs alongside the text. — Preceding unsigned comment added by 77.78.86.161 (talkcontribs) 30 March 2016 (UTC)

I tend to agree. The statement log(10)~2.3026 is slightly confusing at first glance. — Preceding unsigned comment added by 50.90.185.36 (talkcontribs) 26 April 2016 (UTC)
I don't agree. Number theorists almost systematically use the notation "log" for the hyperbolic (or "natural") logarithm (and regarding this Hardy and Wright stated in their introduction to the Theory of Numbers: "log x is, of course, the 'Napierian' logarithm of x, to base e. 'Common' logarithms have no mathematical interest."). It is not the role of an encyclopedia to "correct" a well-established usage. Sapphorain (talk) 22:52, 26 April 2016 (UTC)[reply]
"Common" (decimal) logarithms of course have no mathematical interest, but binary logarithms become more and more important, even if they weren't so when Hardy and Wright wrote their book. That usage might be well established in the number theory, but the WP scope is much broader, so using a notation that is less confusing and is uniform across various topics should be strongly preferred. And while the base does not matter in asymptotics, the definition surely depends on the base if we want to write "". In any case, what is the problem with ? It is unambiguous and universally recognized. — Mikhail Ryazanov (talk) 21:58, 27 December 2017 (UTC)[reply]
There is of course no particular problem with the notation ln. Similarly as there is no particular problem that a text be written in British or American English. However, there is a big problem when bureaucrats appeal to norms fixed by the ISO in order to impose a uniform mathematical notation in all sciences. It just doesn’t work that way, and it won’t ever work that way. I am a number theorist, and in all recent books and articles in my field I consulted recently (with only one single exception I admit), there still is only one logarithm, the natural logarithm, and still only one notation for it, log. It is not the role of wikipedia to teach scientists the notation they should use in their field. And it is not the role of wikipedia either to « translate » into the « authorized » notation their arguments and results. Sapphorain (talk) 20:34, 2 January 2018 (UTC)[reply]

Looking at mathematical textbooks where both "log" and "ln" are taught, it is mathematically important to make clear which one is being discussed and not leave it up to chance. I have yet to come across a Trigonometric or higher math book which uses "log" for the natural logarithm. I have not viewed a mathematics dictionary which does not specifically distinguish between these terms. Even just for historical reasons log should not be used to represent the natural logarithm unless a subscript e is used to define the logarithm's base. RMoses (talk) 22:57, 31 January 2018 (UTC)[reply]

I'm having trouble parsing your sentence. But I'll point out that which notation is used is not at all important mathematically, as long as it's used consistently. I could denote the natural logarithm of x by bork(x) as long as I made it clear what I was using it for. And even though this is a number theory article, "log" is used very commonly to denote the natural logarithm in all branches of mathematics (see e.g. Conway's Functions of One Complex Variable I, as the first book I happened to grab, which is a common graduate textbook in complex analysis). This article uses "log" consistently, and it makes it clear what's meant in the lead. When more than one style for something is possible, we generally retain the existing one; see MOS:STYLERET for a bit more detail. Your edits were also problematic because you changed the uses in the lead, but not in the rest of the article. –Deacon Vorbis (carbon • videos) 23:47, 31 January 2018 (UTC)[reply]
Wikipedia's purpose is to benefit readers by acting as an encyclopedia, a comprehensive written compendium that contains information on all branches of knowledge within its five pillars.
The above is Wikipedia's purpose statement. It is attempting to be comprehensive and I assume for all or at least a large majority. You quote your book on the use of logarithm and that these texts use log as the natural logarithm. Yet, there are many that use ln. Advanced Calculus by Taylor and Mann for one. But if Wikipedia is truly to be comprehensive then the knowledge needs to be written for the majority RMoses (talk) 04:38, 1 February 2018 (UTC)[reply]
.... majority of readers and not to a higher level which most would find confusing when this is able to be done. For most, Wikipedia will be the first place they will most likely read about the Prime Number Theorem. Many will be high school students exploring more on prime numbers. Their knowledge, as with many college students, will be what they were shown, taught, and first learned: that log is the common logarithm and ln is the natural logarithm. They have not as of yet been graced with certain books not using the common logarithm at all and just using the natural logarithm. This will cause confusion to them on something a smaller group uses on a regular basis. Most people seeing the Prime Number Theorem will most likely go no deeper than what is written in Wikipedia, and their simple knowledge of the difference between log and ln will overall not be affected. Those who do pursue a deeper understanding will at first be confused by the use of log. This is the majority of people I think Wikipedia wants: those discovering this information for the first time. Those with a higher understanding of the usage in higher-level math books where just the natural log is used will not be hindered at all. Thus using ln versus log is most helpful to the larger group of Wikipedia users versus those with more comprehension and understanding of the terms and their use. RMoses (talk) 04:58, 1 February 2018 (UTC)[reply]
Mr. / Ms. Boris you commented to me in your above message:
Your edits were also problematic because you changed the uses in the lead, but not in the rest of the article. –
Does this also apply to the graphs in the Statement index? I would be willing to help change those to the method those within the Wikipedia group would seem most correct. RMoses (talk) 05:19, 1 February 2018 (UTC)[reply]
Mr./ Ms. Deacon Boris,
Please explain why my last edit was not helpful when this actually was in line with your comment above on Jan 31, 2017 23:47 UTC.
Your edits were also problematic because you changed the uses in the lead, but not in the rest of the article.
The note only referenced the info in the graph that was using different terminology than the text. It was a simple note and your comment "Not helpful" is actually contradictory of your comment above since now two different logarithmic terms are being used in the document. It was helpful for those seeing the two different terms for the natural logarithm being used simultaneously. RMoses (talk) 23:42, 1 February 2018 (UTC)[reply]
Please indent your replies. See Help:Talk for help on this and more information about using talk pages. Also, when adding comments, you can click on "edit section". This way, it's clear to people watching the page which section is being commented in.
I can only assume by "Mr./ Ms. Deacon Boris" you're talking to me, but that's pretty clearly not my username. Simply using someone's username (or even an abbreviation thereof) will do fine. I'll admit that I'm having trouble understanding some of what you've written. The caption on the image is clear. Ideally, the image would use "log" as well, but it doesn't. Putting an awkward note right at the beginning of the nearby section isn't necessary. –Deacon Vorbis (carbon • videos) 00:50, 2 February 2018 (UTC)[reply]
Why would it not be necessary if it helps the reader understand the information. You even commented to me how important it is to have only one method in the article, yet here are both methods being used with no explanation to the reader. RMoses (talk) 03:37, 2 February 2018 (UTC)[reply]
Again, as I mentioned above, please indent your replies. See Help:Talk for more information. The note you tried to add is awkwardly placed and just draws attention to a minor discrepancy in one image. A user reading the caption on that image can reasonably be expected to notice what's going on. –Deacon Vorbis (carbon • videos) 03:55, 2 February 2018 (UTC)[reply]
I agree with RMoses on this. "ln" is unambiguous, it always means "loge". "log" may mean "log with an unspecified base", or "log with a base specified elsewhere", or (usually, in my experience) "log10", or "loge". Sure, this article makes it clear in the lead which it means. But it would be better to use the unambiguous "ln" throughout the article. Maproom (talk) 09:29, 2 February 2018 (UTC)[reply]

Summary of the previous discussion follows. Please correct me if I missed something.

  • Current notation for natural logarithm is inconsistent: Text uses "log", graphs use "ln".
  • If the notation is changed, it must be done in the entire article.
  • Current notation should be retained unless there is a good reason to change it (MOS:STYLERET).
  • Any notation is equally good, if it is defined at the beginning of the article.
  • The notation, which does not require the reader to read the definitions, is better.
  • Wikipedia should be accessible to audience as broad as possible.
  • "ln" is unambiguous, whereas "log" denotes decimal logarithm on calculators and in high-school maths.
  • Number theory books use "log", because there is no mathematical interest in decimal logarithm.
  • We should not "translate" formulas from sources, although such "translation" may be necessary if different sources use different notations.

I think that the arguments in favor of "ln" are more important, so I made the change carefully everywhere, but got reverted. Is the current state really better? Petr Matas 05:49, 8 November 2018 (UTC)[reply]

Deacon Vorbis: if, despite the discussion above, you still insist on "log", please change all the instances in the article, including those in the graphs. Maproom (talk) 08:23, 8 November 2018 (UTC)[reply]
It is not the role of Wikipedian contributors to decide that a notation is better than the notation which is used by the great majority of books and other publications in number theory, even if the arguments in favor of a change are very good. If and only when the majority of textbooks and articles in the field use the « better » notation, will it be justified to follow suit in this article. To do it now would be giving undue weight to a small minority of publications in the field. Sapphorain (talk) 08:41, 8 November 2018 (UTC)[reply]
This has already been discussed ad nauseam, but here are a few points to add/reiterate: (1) Although this is a number theory article, I don't think this issue is specific to number theory, as I mentioned above. (2) While we shouldn't just go around making up our own notation, there's no reason why we can't translate notation from sources when needed. (3) MOS:STYLERET is king here. This is reasonably analogous to CE/BCE vs. AD/BC (and we all know which one is right! see MOS:DATEVAR). None of the arguments presented are particularly relevant to this page alone and could apply to just about any article. Because of the wider scope, there should probably be wider discussion than takes place on a single article (possibly an RFC). (4) If you want to change the images to be consistent with the article, then go for it. I don't have the technical expertise to do it, but if someone else wants to, that'd be great (and while they're at it, the d's in the integrals should be changed to italic instead of roman to be consistent with the article also); perhaps WP:Graphics lab could help. –Deacon Vorbis (carbon • videos) 13:53, 8 November 2018 (UTC)[reply]
Indeed, this issue may apply to many other articles as well and an RFC could resolve it. I wonder whether any centralized discussions took place in the past. Petr Matas 18:59, 8 November 2018 (UTC)[reply]
Recently, in fact, at Wikipedia talk:Manual of Style/Mathematics#Logarithms?, which points to a much older discussion as well. –Deacon Vorbis (carbon • videos) 19:33, 8 November 2018 (UTC)[reply]

I was writing a paper and somehow I accidentally used ln instead of log; the result appeared to be better on Geogebra. I am convinced the article on which the use of log instead of ln is based is old or something; the ln seems to fit just perfectly in every case. However, I'm just a student and I might be wrong; I'm only asking to try the formulas with log and with ln in a math program and check the results. I used small primes between 7 and 997. Also, my apologies that I didn't read the argumentation above; perhaps it's already explained why that is and why log is still better; I'm just quickly passing by. Akionaio+060 (talk) 19:36, 26 February 2019 (UTC)[reply]

Add ()?


I was reading this:

and wondering if:

should be:

or:

?
It should be neither, but just as it is. The first expression makes no sense, and the parentheses in the second one are redundant by convention (see order of operations). Sapphorain (talk) 07:44, 25 October 2016 (UTC)[reply]

Sign switching of Li(x)-π(x)?


The image description below the approximations’ absolute error plot states that On the other hand, Li(x) − π(x) switches sign infinitely many times, but the plot clearly shows values of that function stay positive. On the other hand the function fluctuates with x – it generally increases, but not monotonically. So shouldn’t the description state that the first derivative of Li(x) - π(x) switches sign infinitely many times? Silmethule (talk) 09:48, 25 September 2018 (UTC)[reply]

Or does it change signs near the beginning of the coordinate system, for low xs? If so, could it be stated more clearly? Silmethule (talk) 09:54, 25 September 2018 (UTC)[reply]
The first sign change is known to be somewhere before 10^1000 (see [4]), at a place where no representation of the function Li(x)−π(x) is available (or calculable). Sapphorain (talk) 10:50, 25 September 2018 (UTC)[reply]

Discrepancy


Looking at https://dlmf.nist.gov/27.12, specifically 27.12.6, the sign of the denominator term shows −1/5, whereas the article shows (1/5) (see just after "...for some positive constant a, where O(...) is the big O notation. This has been improved to..."). I have no clue which is correct. Billymac00 (talk) 02:52, 6 February 2021 (UTC)[reply]

Is this right?


Toward the end of the section Proof sketch this passage appears:

"The next step in the proof involves a study of the zeros of the zeta function. The trivial zeros −2, −4, −6, −8, ... can be handled separately:

which vanishes for a large x."

But neither the left-hand expression nor the right-hand expression "vanishes" for any value of x.

I hope someone familiar with this subject can fix this. 2601:200:C000:1A0:60C7:98D0:A3AE:B606 (talk) 01:34, 6 December 2021 (UTC)[reply]

"vanish" does not mean here "is equal to zero", but "tends to zero". I would however suppress the indefinite article "a" before "large x". --Sapphorain (talk) 07:52, 6 December 2021 (UTC)--Sapphorain (talk) 07:52, 6 December 2021 (UTC)[reply]

Maiga effort


This link http://jonkagstrom.com/approximate-primes/index.html provides a useful approximation that seems very competitive, superior to Li().Billymac00 (talk) 02:55, 9 January 2022 (UTC)[reply]

A classical result of Littlewood gives π(x) − li(x) = Ω±(√x ln ln ln x / ln x),
and so I'm skeptical about claims of tight approximation since the values are known to oscillate around li(x). It looks to me like this c(x) is basically the same as Riemann's R(x).
But if you want to look into it, I think comparing or equally to the error term above would be interesting, where .
CRGreathouse (t | c) 05:44, 10 January 2022 (UTC)[reply]

The first such distribution found...


In the introduction, "The first such distribution found" is mentioned but not any others.Chris2crawford (talk) 22:37, 5 February 2022 (UTC)[reply]

Others are mentioned later. Prime number theorem#Prime-counting function in terms of the logarithmic integral says: "So, the prime number theorem can also be written as π(x) ~ Li(x)."
Prime number theorem#Non-asymptotic bounds on the prime-counting function gives explicit bounds. It doesn't explicitly say π(x) ~ x/ln x but it trivially follows. PrimeHunter (talk) 11:31, 6 February 2022 (UTC)[reply]

Proof via phi(n) removal


Hi,

May I know why this proof was censored? 75.25.161.74 (talk) 04:07, 29 June 2023 (UTC)[reply]

I suspect that is because it was unreferenced (unsourced) and there were errors in a couple of math formulas; some editors prefer to revert edits instead of fixing them, even when they look mostly good. And the lack of explanation for the removal doesn't help. ... Dhrm77 (talk) 11:06, 29 June 2023 (UTC)[reply]

page is better titled history of prime number counting theorem


okay i UNDERSTAND that prime number theorem has a lot of historical background to it, but that's all this page is?

it's almost impenetrable to a general user about what the prime number theorem is even about because of all the CONSTANT references to how this subject came about historically.

this page needs a serious rewrite, this is absurd 2600:6C47:A03F:C443:60AF:8C36:BD3:F514 (talk) 18:06, 2 September 2023 (UTC)[reply]

Recent work on primes in arithmetic progressions; fixed residue classes


Could it improve the article to have some text about new work in the subject of how many primes there are in an arithmetic progression? Maynard discusses his extension of the Bombieri-Vinogradov Theorem in (Primes in arithmetic progressions to large moduli I: Fixed residue classes (arxiv.org)); Dirichlet's theorem shows that the primes are roughly equidistributed for a coprime to q, and the Generalized Riemann Hypothesis (GRH) would imply this equidistribution occurs whenever q is smaller than the square-root of the size of the primes. Also, it is helpful that the Siegel–Walfisz theorem is mentioned, but the Bombieri–Vinogradov theorem (1965), which has an article on Wikipedia, is not mentioned. It probably should be mentioned here. Maybe the Maynard work is too advanced, but there seems to be a gap here. Rozenwithaz (talk) 19:14, 6 November 2023 (UTC)[reply]

These two pages have way too much duplicate material. Prime number theorem § Table of π(x), x/log x, and li(x) and Prime-counting function § Table of π(x), x/log x, and li(x) are the most obvious, but there's a lot of duplication (some of which is my fault) in the approximations and bounds on n = π(x) and its inverse x = pn. Maintaining two copies is laborious and error-prone; the duplicates should either be eliminated or shared via WP:SELECTIVETRANSCLUSION. (I'm also considering replacing x/log x with x/(log x − 1) in that table, but that's a different issue.)

But figuring out how to do the division is non-trivial.

One possibility is to merge them, but they're both long articles already, and the combination might be unwieldy. So I'm assuming we stick with the split.

The basic division is that the PNT is asymptotic and about approximations like Li(x), while exact computations (like the Meissel–Lehmer algorithm, Lagarias–Miller–Odlyzko algorithm, and so on) should go in PCF. Likewise Riemann's R: that's much more about the detailed distribution and zeta than the asymptotics, so it belongs in PCF.

But what about § Approximations for the nth prime number? I put a lot of effort into making it clear, then realized I don't know which article it should go in. I also get blame for not keeping track of which article I'm editing and putting details in both this and Prime counting function § Inequalities. I'd like to consolidate this in one place or another, but it's not super obvious where. It's a lot of the same authors and papers as the π(x) formulae, so PCF might be easier.

Anyway, I'm canvassing for ideas. Thank you in advance for any thoughts. Or edits, if you have a strong sense of what The Right Thing is and want to just make the changes to the main articles. 97.102.205.224 (talk) 01:21, 12 December 2024 (UTC)[reply]

Definitely the formulas need to be accessible from both places, whether via transclusion or linking. I'd slightly prefer the content to be at PCF and linked from PNT but any solution would be fine.
I agree that the tables are a lot, I don't have any useful suggestions. I think that they *are* useful especially for people without an intuition for error terms.
CRGreathouse (t | c) 03:42, 12 December 2024 (UTC)[reply]