Logarithmic growth in the context of "Exponential growth"

⭐ Core Definition: Logarithmic growth

In mathematics, logarithmic growth describes a phenomenon whose size or cost can be described as a logarithmic function of some input, e.g. y = C log(x). Any logarithm base can be used, since one base can be converted to another by multiplying by a fixed constant. Logarithmic growth is the inverse of exponential growth and is very slow.
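The base-conversion claim follows directly from the change-of-base identity; as a quick worked formula, switching bases only rescales the constant C:

```latex
\log_b x = \frac{\ln x}{\ln b}
\quad\Longrightarrow\quad
C \log_b x = \underbrace{\left(\tfrac{C}{\ln b}\right)}_{C'} \ln x .
```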

A familiar example of logarithmic growth is the number of digits of a positive integer N in positional notation, which grows as log_b(N), where b is the base of the number system used, e.g. 10 for decimal arithmetic. In more advanced mathematics, the partial sums of the harmonic series 1 + 1/2 + 1/3 + 1/4 + ⋯ also grow logarithmically, approximately as ln(n).
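A minimal Python sketch illustrating both examples (the helper names are ours, not from the source): the digit count tracks log10(n) and the harmonic partial sum tracks ln(n).

```python
import math

def digit_count(n: int) -> int:
    """Number of decimal digits of a positive integer n; grows as log10(n)."""
    return len(str(n))

def harmonic(n: int) -> float:
    """Partial sum H_n = 1 + 1/2 + ... + 1/n; grows as ln(n) + gamma."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1_000, 1_000_000):
    # Both columns grow very slowly compared with n itself.
    print(f"n={n:>9}  digits={digit_count(n)}  H_n={harmonic(n):.3f}  ln(n)={math.log(n):.3f}")
```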

👉 Logarithmic growth in the context of Exponential growth

Exponential growth occurs when a quantity grows as an exponential function of time. The quantity grows at a rate directly proportional to its present size. For example, when it is 3 times as big as it is now, it will be growing 3 times as fast as it is now.

In more technical language, the instantaneous rate of change (that is, the derivative) of the quantity with respect to an independent variable is proportional to the quantity itself. Often the independent variable is time. Described as a function, a quantity undergoing exponential growth is an exponential function of time; that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). Exponential growth is the inverse of logarithmic growth.
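This defining property can be written as a one-line differential equation, and solving it yields the exponential function directly:

```latex
\frac{dx}{dt} = kx,\ k > 0
\quad\Longrightarrow\quad
x(t) = x_0\, e^{kt}.
```

Since dx/dt = kx, a quantity that is 3 times its current size is growing 3 times as fast, matching the example above.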

Explore More Topics in this Dossier

👉 Logarithmic growth in the context of Causes of global warming

The scientific community has been investigating the causes of current climate change for decades. After thousands of studies, the scientific consensus is that it is "unequivocal that human influence has warmed the atmosphere, ocean and land since pre-industrial times." This consensus is supported by around 200 scientific organizations worldwide. The scientific principle underlying current climate change is the greenhouse effect: greenhouse gases let sunlight pass through to heat the Earth's surface, but trap some of the resulting heat that radiates back from the planet. Large amounts of greenhouse gases such as carbon dioxide and methane have been released into the atmosphere through the burning of fossil fuels since the Industrial Revolution. Indirect emissions from land-use change, emissions of other greenhouse gases such as nitrous oxide, and increased concentrations of water vapor in the atmosphere also contribute to climate change.

The warming from the greenhouse effect has a logarithmic relationship with the concentration of greenhouse gases. This means that each additional increment of CO2 and other greenhouse gases in the atmosphere has a slightly smaller warming effect than the increment before it as the total concentration rises. However, only around half of emitted CO2 remains in the atmosphere; the other half is quickly absorbed by carbon sinks on land and in the oceans. Further, the warming per unit of greenhouse gas is also affected by feedbacks, such as changes in water vapor concentration or Earth's albedo (reflectivity).
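A commonly cited simplified expression for this logarithmic relationship is the CO2 radiative-forcing approximation from the climate literature (the coefficient comes from that literature, not from this text):

```latex
\Delta F \approx 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},
```

where C is the current CO2 concentration and C_0 a reference concentration. Each doubling of C adds roughly the same forcing (about 3.7 W/m²) regardless of the starting level, which is why each additional increment warms slightly less.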

👉 Logarithmic growth in the context of Worst-case complexity

In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g. running time, memory) that an algorithm requires given an input of arbitrary size (commonly denoted as n in asymptotic notation). It gives an upper bound on the resources required by the algorithm.

In the case of running time, the worst-case time complexity is the longest running time of an algorithm over all inputs of size n, and thus guarantees that the algorithm will finish within that time. The order of growth (e.g. linear, logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms.
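Binary search is the textbook example of an algorithm with logarithmic worst-case running time; a minimal sketch (our own illustration, not from the source):

```python
def binary_search(sorted_items: list[int], target: int) -> int:
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining search interval, so the worst
    case performs about log2(n) iterations: O(log n) time.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Even for a million elements, at most ~20 comparisons are needed.
print(binary_search(list(range(1_000_000)), 765_432))
```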
