Talk:Hyperoperation


"Hyper-0"[edit]

There is no such operation as "add one". Addition is the most basic operation. The "add one" function, or counting, is merely an example of an arithmetic progression with an initial term and common difference of 1. And even if we look at it as something different, it cannot be called a binary, or arithmetic, operation - the operations of addition - subtraction, multiplication - division, exponentiation - root - (logarithm if you wish) are operations involving two numbers, and the result of the operation depends on both of those numbers. But the "add one" function has only one number input. It's not a binary operation, just as sine, cosine, and other like functions are not. Majopius (talk) 21:44, 9 February 2010 (UTC)[reply]

See Addition or Peano axioms. The add 1 function is normally called the successor function and addition is defined in terms of it. Dmcq (talk) 23:46, 9 February 2010 (UTC)[reply]
You guys are both right in a way. The successor (or "add one") function is very different from the rest of the hyperoperations; as Majopius noted, it is not really a binary function (although we treat it that way in this article). Before Peano and his peers, I believe that addition was considered to be a basic operation. However, Peano and others did find successor useful for defining addition axiomatically. It is also a clear extension of the hyperoperation series of functions below addition. Your argument is basically the same as the one between people who say that you cannot subtract 5 from 3 (3 - 5) and those who say it is -2. Neither are wrong, it just depends on your system. Cheers, — sligocki (talk) 04:24, 10 February 2010 (UTC)[reply]
I still do not agree that there is such an operation. Addition need not be defined in terms of a simpler operation - what could be simpler than addition? The "add one" function is, in this sense, merely a special case of addition, and just because it was given a verbal, non-mathematical name does not change its true identity. Majopius (talk) 01:27, 26 February 2010 (UTC)[reply]
You may be biased by language. Consider a visual approach to numbers: consider a number line with sequential marks representing the numbers 0, 1, 2, 3, ... from left to right. Now, the successor function is defined as moving to the next mark to the right. This is clearly a very basic operation. How would you define addition? You could do it by measuring the length from 0 to n and going that far past m to get n+m (assuming that the marks are evenly spaced), but that is rather complicated and you would need some sort of device that measures distance. Alternatively, you could define addition based upon the successor function: to add n to m, start with one finger at n and the other at 0, apply successor to both repeatedly until your second finger is on m, then your first finger is on n+m. This requires only 2 fingers and the ability to move them to the right. This is roughly the reason that we sometimes think of successor as a more basic operation than addition. Cheers, — sligocki (talk) 03:36, 4 March 2010 (UTC)[reply]
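For concreteness, here is a minimal Python sketch of that idea (my own illustration, not something from the article): addition built from nothing but the successor function.

def successor(n):
    # The "add one" function: the only primitive used here; it stands in
    # for moving one mark to the right on the number line.
    return n + 1

def add(n, m):
    # Add m to n by applying successor to the running total m times
    # (the loop counter plays the role of the second finger above).
    result = n
    for _ in range(m):
        result = successor(result)
    return result

print(add(3, 5))  # 8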
What helps clear up this matter is logarithms. A logarithm acts like a mirror image of the number, because everything is the same, but one step lower - when numbers multiply, their logarithms add. When they divide, the logarithms are subtracted. When a number is raised to a power, its logarithm is multiplied by the power. Furthermore, we get relationships between the two identity elements - 0 and 1: since logarithms take each operation a step down, multiplication reduces to addition, and so the logarithm of 1 (the multiplicative identity) is 0 (the additive identity). Finally, the logarithms of multiplicative inverses are additive inverses. So, if the logarithms behaved in some definite way when the numbers are added, then that would be a Hyper-0 operation. But there is no such formula for the logarithm of a sum (log(x + y)) - at least, it is not the successor function. Majopius (talk) 23:32, 8 March 2010 (UTC)[reply]
You are oversimplifying things: log(a*b) = log(a) + log(b) and log(a^b) = log(a)*b, but this does not work for Hyper-4 (tetration); log(a^^b) is not log(a)^log(b) or log(a)^b or anything like that. Just because log seems to "reduce each operation a step down" for a few examples does not mean that it does for all generalizations. Cheers, — sligocki (talk) 00:33, 14 March 2010 (UTC)[reply]
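A quick numerical check of this point (my own illustration) for a = 2, b = 3, where 2^^3 = 2^(2^2) = 16:

import math

a, b = 2, 3
tet = a ** (a ** a)  # 2^^3 = 2^(2^2) = 16

# log does step multiplication and exponentiation down one level...
print(math.log(a * b), math.log(a) + math.log(b))  # both ~1.792
print(math.log(a ** b), math.log(a) * b)           # both ~2.079
# ...but there is no comparable rule for tetration:
print(math.log(tet))               # ~2.773
print(math.log(a) ** b)            # ~0.333, not equal
print(math.log(a) ** math.log(b))  # ~0.668, not equal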
Furthermore, I do not believe in tetration or any other such things further on. Tetration cannot be consistently defined for all inputs, and is not that necessary. I made an attempt to define it for non-integer heights, but got nothing, and so left that idea. But the idea of defining addition a + b as a + 1 + 1 + ... + 1 is completely wrong. To be perfectly honest, I hate it. Defining multiplication ab as a + a + ... + a, though incorrect, is still bearable, but not "add one". The subsequent operations after addition are defined recursively as the previous operation applied to b copies of a. But here addition is defined as b copies of 1 and an a separately. To me, it looks pretty ugly - breaking one summand into ones and making the picture look so asymmetrical. Majopius (talk) 00:57, 26 March 2010 (UTC)[reply]

(unindent) Well, you don't need to consider generalizations of addition and multiplication, but this page is all about those generalizations. I agree, some of these things are not the most pretty or intuitive. But this is how you define hyperoperations. Cheers, — sligocki (talk) 21:20, 27 March 2010 (UTC)[reply]

Alright, alright. Those who believe in strange things like "add one", tetration, pentation, hexation, heptation,... may continue to do so. But I don't - and neither would any mathematician in their right mind. I don't mean to be crude, but the idea I stick to is this: there are two primary operations of two different degrees: addition (degree 1) and multiplication (degree 2). The other two operations - subtraction and division - naturally appear as inverses of addition and multiplication, and have the same degree as their primaries. Next, by iterating multiplication, we get exponentiation (degree 3). Because it is non-commutative (as it is the first operation to be defined recursively in terms of repeated multiplication), it has two inverses - root (degree 3, of course; however, root is different in that it can give non-unique results depending on its index) and logarithm, whose status as an operation remains questionable due to its nature - "taking the logarithm" does not seem like a meaningful operation - it is interpreted more like a property of the number itself. Besides, it is root that naturally replaces division as the higher operation (for example, when switching from an arithmetic to a geometric progression), not logarithm. Majopius (talk) 03:10, 28 March 2010 (UTC)[reply]
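To spell out the "two inverses" point in symbols (my own addition, not part of the comment above): because a + b = b + a and a*b = b*a, the equations a + b = c and a*b = c each pose only one inverse problem (a = c - b and a = c/b respectively), whereas a^b = c, with a^b generally different from b^a, poses two distinct ones: solving for the base gives the root, a = c^(1/b), and solving for the exponent gives the logarithm, b = log_a(c).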
Any mathematician "in their right mind" would not speak about "believing in" an algebraic function. These functions are not empirical. They exist by definition, not by physical discovery, and they are defined for utilitarian purpose. Hence, there are multiple definitions for the same operation in different contexts (such as how 0^0 is defined differently in different domains). Consider that many branches of math which have no obvious connection to reality are actually quite useful for modeling real processes: complex numbers in electrical engineering, or tetration in computational analysis. Whether you "believe in tetration" is irrelevant, because it has a use. Your point of view has been very rare for over a century and is not relevant to the article. TricksterWolf (talk) 17:32, 21 May 2012 (UTC)[reply]
"But I don't - and neither would any mathematician in their right mind."
Very nearly every mathematician in the past hundred years would say you are the dumb one, not the people studying tetration.
What does it even mean to "believe in" a function? It's not a god, it doesn't save you from your sins or accept sacrifices (although I guess you could give sacrifices to a function). It just maps an input to an output. If by "believe in" you mean that you consider a function useful: whether you consider a function to be useful or not is a different question.
But before you go calling mathematicians "out of their mind", bear in mind that there are many things that do not look useful at first but in fact are very useful, so it's not stupid to study something that's seemingly useless; plus it can be fun. Math can be done just for fun, or because you think it might be interesting. It seems Euler knew this well: sure, some of his results are just neat facts and not "I just invented calculus" things, but to invent calculus one must come up with some weird ideas such as "infinitesimals". Of course tetration is not defined for all the reals, at least not in a standard way, but it does have a standard definition for all of the integers. At no point in the definition of a function does it say "a function must be defined for all the reals". JGHFunRun (talk) 16:13, 28 October 2022 (UTC)[reply]

Even though this discussion was last active a WWII ago (6 years, 1 day), I wish to note that the fun thing about math is that, like science, it doesn't matter whether you agree such an operation exists. It's true whether you accept it or not. I also want to point out that math is... strange in the way existence works: things exist by definition; essentially, if you asked Euclid to explain why all right angles are equal, the answer would be "because fuck you that's why" (pardon my illustrative profanity). Math is done by saying certain things are true without justification and seeing what results; it might not be applicable to the real world, but it isn't wrong.

You can define addition in terms of successor, or you can make it a primitive notion; the former is more popular in abstract mathematics.

(and nevertheless, yes there is a 'plus one' function. Even if you don't like defining addition in terms of it, it still exists.) Hppavilion1 (talk) 18:55, 22 May 2018 (UTC)[reply]

OK, I missed when Majopius said "I do not believe in tetration". Seriously, tetration is defined right there in the article, so it exists. You might not see the use for it, but it exists because we just showed you a definition. Hppavilion1 (talk) 18:57, 22 May 2018 (UTC)[reply]
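For what it's worth, here is a minimal Python sketch of that integer-height definition (just an illustration of the recursion a^^0 = 1, a^^b = a^(a^^(b-1)); it says nothing about real heights):

def tetration(a, b):
    # Tetration (hyper-4) for a non-negative integer height b:
    # a^^0 = 1 and a^^b = a ** (a^^(b-1)).
    if b == 0:
        return 1
    return a ** tetration(a, b - 1)

print(tetration(2, 3))  # 2^(2^2) = 16
print(tetration(2, 4))  # 2^(2^(2^2)) = 65536
print(tetration(3, 2))  # 3^3 = 27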

"Heptation" listed at Redirects for discussion[edit]

An editor has asked for a discussion to address the redirect Heptation. Please participate in the redirect discussion if you wish to do so. 1234qwer1234qwer4 (talk) 13:24, 17 April 2020 (UTC)[reply]

Lack of generality[edit]

Addition, multiplication and exponentiation are particular operations. Further on, a notation is used without any indication whatsoever of what it means - is it concatenation, exponentiation, or a totally new operation?

Notice that the first formula carries a condition, which is utterly confusing on all levels, because further on it says that the function is universal. The article speaks about representing addition, multiplication and exponentiation, which we know already, and then even asks what lies beyond exponentiation and tetration... Since we do not know those higher functions as well as addition and multiplication, we are confused over what the notation means in the first place.

So let me fill in a serious gap with a few examples. For addition the chain is longer by 1: a + b = a + 1 + 1 + ... + 1, with b ones after the a, so b + 1 terms in all.

Very similar for multiplication, but here the chain is not longer: a·b = a + a + ... + a, with exactly b copies of a.

So technically all higher operations can be represented through addition if we want to. Since exponentiation is the last operation with which we have far more experience than with tetration, we express the higher operations using exponentiation. We do not express exponentiation through addition, although in order to understand the higher operations we should do it as an exercise.

This last example shows how it goes at the upper levels. So where is the second operand hidden? It is in the grouping: addition sits at the lowest level, so we have one group of elements that raises the level and another group of elements within it.

However, exponentiation is not associative, so it creates some trouble when we try to explain tetration, as we have to keep the parentheses.
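A concrete instance of this (my own example, not from the comment above): 2^^4 = 2^(2^(2^2)) = 2^16 = 65536, whereas grouping from the left would give ((2^2)^2)^2 = 2^8 = 256, so the parentheses cannot be dropped.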

This is to say: in an exponentiation the previous operation (multiplication) is applied the given number of times, and that is what gives the expansion. So let us try to create the same kind of multiplicative structure as we did with addition and multiplication previously: first we have one group, next this is repeated four times, next again four times, and then that result again four times.

This last example is clearly showing what is happening at the levels beyond exponentiation. So in some sense addition and multiplication are too simple to explain the complexity. — Preceding unsigned comment added by 158.248.76.166 (talk)
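For reference, here is a short Python sketch of the standard recursion the article uses for H_n(a, b); it is only an illustration of that definition, nothing beyond it:

# Hyperoperation recursion as given in the article:
#   H_0(a, b) = b + 1 (successor)
#   H_1(a, 0) = a,  H_2(a, 0) = 0,  H_n(a, 0) = 1 for n >= 3
#   H_n(a, b) = H_{n-1}(a, H_n(a, b - 1)) otherwise
def hyperop(n, a, b):
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)
    return hyperop(n - 1, a, hyperop(n, a, b - 1))

print(hyperop(1, 2, 3))  # addition: 2 + 3 = 5
print(hyperop(2, 2, 3))  # multiplication: 2 * 3 = 6
print(hyperop(3, 2, 3))  # exponentiation: 2^3 = 8
print(hyperop(4, 2, 3))  # tetration: 2^^3 = 16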

Please see our core content policy WP:OR, and our behavioral policy WP:NOTFORUM. --JBL (talk) 13:50, 24 July 2021 (UTC)[reply]

"Ennation" listed at Redirects for discussion[edit]

An editor has identified a potential problem with the redirect Ennation and has thus listed it for discussion. This discussion will occur at Wikipedia:Redirects for discussion/Log/2022 May 8#Ennation until a consensus is reached, and readers of this page are welcome to contribute to the discussion. 1234qwer1234qwer4 10:31, 8 May 2022 (UTC)[reply]

Fractional extension[edit]

If we define tetration via the Kneser method, build up pentation via the Kneser method, and so on, would we be able to interpolate and find fractional values for hyperoperations, such as 2[1.5]3? Kwékwlos (talk) 11:34, 22 March 2023 (UTC)[reply]

I conjecture that 3[0.5]3 is approx. 5.4, 3[1.5]3 is approx. 7, and 3[2.5]3 is approx. 13. See https://commons.wikimedia.org/wiki/File:Hyperoperation_3_and_3_with_real_number.svg and https://commons.wikimedia.org/wiki/File:Hyperoperation_3_and_n_with_real_number.svg. Kwékwlos (talk) 12:52, 28 March 2023 (UTC)[reply]
This is not a forum for conducting original research. The article's content should reflect what is written in reliable sources. --JBL (talk) 17:21, 28 March 2023 (UTC)[reply]
I managed to dig up a reliable source (https://www.hindawi.com/journals/mpe/2016/4356371/) that tries to construct a[3/2]b using the arithmetic-geometric mean. But I don't see how this could be a practical way of extending hyperoperations to non-integer values. Kwékwlos (talk) 11:50, 22 April 2023 (UTC)[reply]
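For anyone unfamiliar with the ingredient, here is a minimal Python sketch of the arithmetic-geometric mean iteration itself (only the AGM, not the paper's construction of a[3/2]b):

import math

def agm(x, y, tol=1e-12):
    # Arithmetic-geometric mean of two positive numbers: repeatedly replace
    # (x, y) by their arithmetic and geometric means until they agree to within tol.
    while abs(x - y) > tol * max(abs(x), abs(y)):
        x, y = (x + y) / 2.0, math.sqrt(x * y)
    return (x + y) / 2.0

print(agm(1, 2))  # ~1.45679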
I wonder why we don't go into negatives (like 4[-1]5), since we are already talking about numbers that are not strictly "positive integers". Taureonn (talk) 23:14, 7 December 2023 (UTC)[reply]

Question about Modulus[edit]

I may be completely wrong about this, but I was under the impression that modulus was an inverse of multiplication, similarly to how logarithms are an inverse of exponentiation; if I am correct about this, shouldn't modulo warrant attention in the article and in the Hyperoperations template? ThrowawayEpic1000 (talk) 21:41, 8 December 2023 (UTC)[reply]