# Novelty and collective attention

 The observations can be described by a dynamical model characterized by a single novelty factor. Our measurements indicate that novelty within groups decays with a stretched-exponential law, suggesting the existence of a natural time scale over which attention fades.


## Log-normal distribution

We measured the histogram of the final digg counts of all 29,864 popular stories in 2006. As can be seen from Fig. 1, the distribution is quite skewed, and the normal Q–Q plot of $\log(N_\infty)$ is close to a straight line. A Kolmogorov–Smirnov normality test of $\log(N_{\infty})$ with mean 6.546 and standard deviation 0.6626 yields a P value of 0.0939, suggesting that $N_{\infty}$ follows a log-normal distribution.

Let $N_t$ denote the number of diggs of a popular story after a finite time t. The distribution of $\log(N_t)$ again follows a bell-shaped curve. As an example, a Kolmogorov–Smirnov normality test of $\log(N_{2h})$ with mean 5.925 and standard deviation 0.5451 yields a P value as high as 0.5605, supporting the hypothesis that $N_t$ also follows a log-normal distribution.
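As a rough illustration, a test of this kind can be sketched in Python with SciPy on synthetic log-normal samples (the sample size and the mean/sd of $\log N$ are taken from the text; the use of `scipy.stats.kstest` against a fitted normal is an assumption, not necessarily the paper's exact procedure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for the digg counts: log-normal samples with the
# mean and sd of log(N_infinity) reported above.
logN = rng.normal(6.546, 0.6626, 29_864)

# KS test of log(N) against a normal with the sample's own mean and sd;
# a large P value means log-normality of N is not rejected.
stat, p = stats.kstest(logN, 'norm', args=(logN.mean(), logN.std()))
```

Note that with parameters estimated from the data the KS P value is conservative (the Lilliefors caveat), so this is a sketch of the idea rather than a reproduction of the paper's numbers.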

# A simple stochastic dynamical model

• $N_t$ represents the number of people who know the story at time t, and a fraction $\mu$ of those people will further spread the story to some of their friends.
• Mathematically, this assumption can be expressed as $N_t = (1 + X_t)N_{t-1}$, where $X_1, X_2, \ldots$ are positive, independent, and identically distributed random variables with mean $\mu$ and variance $\sigma^2$.
• This growth in time is eventually curtailed by a decay in novelty, which we parameterize by a time-dependent factor $r_t$, consisting of a series of decreasing positive numbers with the property that $r_1 = 1$ and $r_t \rightarrow 0$ as $t \rightarrow \infty$.
• With this additional parameter, the full stochastic dynamics of story propagation is governed by $N_t = (1 + r_t X_t)N_{t-1}$, where the factor $r_t X_t$ acts as a discounted random multiplicative factor.
• Put together, we have $N_t = \prod_{s = 1}^{t}(1 + r_s X_s)N_0$
• When $X_t$ is small (which is the case for small time steps), we have the following approximate solution:
$N_t = \prod_{s = 1}^{t}(1 + r_s X_s)N_0 \approx \prod_{s = 1}^{t} e^{r_s X_s} N_0 = e^{\sum_{s = 1}^{t} r_s X_s} N_0$ [1]

This uses the approximation $1 + x \approx e^x$, valid when $x$ is small.

• Taking the logarithm of both sides, we obtain
$\log N_t - \log N_0 = \sum_{s = 1}^{t} r_s X_s$ [2]
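A quick numerical sanity check of the approximation behind Eqs. 1 and 2 (a sketch: the decay profile and the distribution of the $X_s$ below are illustrative choices, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)
t = 50
x = rng.normal(0.05, 0.02, t)                  # small i.i.d. growth factors X_s
r = np.exp(-0.4 * np.arange(1, t + 1)**0.4)    # a decaying novelty factor r_s

exact = np.prod(1 + r * x)        # N_t / N_0 from the exact recursion
approx = np.exp(np.sum(r * x))    # the e^{sum r_s X_s} approximation of Eq. 1
```

For increments this small the two agree to within about a percent; the approximation degrades as the $X_s$ grow.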

## Mean and Variance

• Taking the mean and variance of both sides of Eq. 2:
$\frac{E(\log N_t - \log N_0)}{var(\log N_t - \log N_0)} = \frac{\sum_{s = 1}^{t} r_s \mu}{\sum_{s = 1}^{t} r_s \sigma^2} = \frac{\mu}{\sigma^2}$ [3]

Question: how is Eq. 3 derived?
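A sketch of the derivation: the numerator follows from linearity of expectation,

$E\left(\sum_{s = 1}^{t} r_s X_s\right) = \sum_{s = 1}^{t} r_s E(X_s) = \mu \sum_{s = 1}^{t} r_s$

and, by independence, the variance of the sum is the sum of the per-step variances. The constant slope $\mu/\sigma^2$ then follows if each discounted step carries variance $r_s \sigma^2$, i.e. if the discount is read as a rescaling of time so that the variance, like the mean, scales linearly in $r_s$ (a purely multiplicative discount would instead give $\sigma^2 \sum_{s = 1}^{t} r_s^2$):

$var\left(\sum_{s = 1}^{t} r_s X_s\right) = \sum_{s = 1}^{t} var(r_s X_s) = \sigma^2 \sum_{s = 1}^{t} r_s$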


### Product of independent variables

If two variables $X$ and $Y$ are independent, the variance of their product is given by

\begin{align} \operatorname{Var}(XY) &= [E(X)]^2 \operatorname{Var}(Y) + [E(Y)]^2 \operatorname{Var}(X) + \operatorname{Var}(X)\operatorname{Var}(Y). \end{align}

Equivalently, using the basic properties of expectation, it is given by

$\operatorname{Var}(XY) = E(X^2) E(Y^2) - [E(X)]^2 [E(Y)]^2.$

Now if $X$ and $Y$ are independent, then by definition $j(x,y) = f(x)g(y)$, where $f$ and $g$ are the marginal PDFs of $X$ and $Y$. Then

\begin{align} \operatorname{E}[XY] &= \iint xy \,j(x,y)\,\mathrm{d}x\,\mathrm{d}y = \iint x y f(x) g(y)\,\mathrm{d}y\,\mathrm{d}x \\ &= \left[\int x f(x)\,\mathrm{d}x\right]\left[\int y g(y)\,\mathrm{d}y\right] = \operatorname{E}[X]\operatorname{E}[Y] \end{align}

and Cov(X, Y) = 0.

Observe that independence of X and Y is required only to write j(x, y) = f(x)g(y), and this is required to establish the second equality above. The third equality follows from a basic application of the Fubini-Tonelli theorem.
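These identities are easy to check numerically (a sketch with arbitrarily chosen normal distributions for $X$ and $Y$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
X = rng.normal(2.0, 1.0, n)   # E(X) = 2, Var(X) = 1
Y = rng.normal(3.0, 2.0, n)   # E(Y) = 3, Var(Y) = 4

# [E(X)]^2 Var(Y) + [E(Y)]^2 Var(X) + Var(X) Var(Y) = 4*4 + 9*1 + 1*4 = 29
formula = 2.0**2 * 4.0 + 3.0**2 * 1.0 + 1.0 * 4.0

var_xy = np.var(X * Y)        # empirical Var(XY), close to 29
mean_xy = np.mean(X * Y)      # empirical E[XY], close to E(X)E(Y) = 6
```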

If the model is correct, a plot of the sample mean versus the sample variance for each time t should yield a straight line passing through the origin with slope $\frac{\mu}{\sigma^2}$, as shown in Fig. 2.

## Computing decay factor

The decay factor $r_t$ can now be computed explicitly from $N_t$ up to a constant scale. By taking expectation values of Eq. 2 and normalizing $r_1$ to 1, we have

$r_t = \frac{E(\log N_t) - E(\log N_{t-1})}{E(\log N_1) - E(\log N_0)}$ [4]

Question: how is Eq. 4 derived?


From Eq. 2, the increments between consecutive time steps are

$\log N_t - \log N_{t-1} = r_t X_t$ [5]

$\log N_1 - \log N_0 = r_1 X_1$ [6]

Taking expectations of Eqs. 5 and 6, using $E(X_t) = \mu$ and $r_1 = 1$, and dividing gives

$r_t = \frac{E(\log N_t) - E(\log N_{t-1})}{E(\log N_1) - E(\log N_0)}$
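Eq. 4 can be sanity-checked by simulation: generate many stories with a known decay profile, then recover $r_t$ from successive differences of the mean of $\log N_t$ (a sketch; the decay profile, $\mu$, and $\sigma$ below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, T, n_stories = 0.05, 0.01, 30, 50_000
r_true = np.exp(-0.4 * np.arange(1, T + 1)**0.4)   # known decay profile (up to scale)

# log N_t = sum_{s<=t} log(1 + r_s X_s), simulated for many stories (N_0 = 1)
X = rng.normal(mu, sigma, (n_stories, T))
logN = np.cumsum(np.log1p(r_true * X), axis=1)

# Eq. 4: successive differences of E(log N_t), normalized so that r_1 = 1
m = logN.mean(axis=0)
diffs = np.diff(np.concatenate(([0.0], m)))
r_hat = diffs / diffs[0]
```

Because $E(\log N_t) - E(\log N_{t-1}) \approx r_t \mu$, the ratio cancels $\mu$ and recovers $r_t$ up to the $r_1 = 1$ normalization.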

## Stretched exponential relaxation

1. The curve of $r_t$ estimated from the 1,110 stories in January 2007 is shown in Fig. 3a. As can be seen, $r_t$ decays very quickly, falling to about 0.03 after 3 hours.
2. Fig. 3 b and c show that $r_t$ decays slower than an exponential and faster than a power law.
3. Fig. 3d shows that $r_t$ can be fit empirically by a stretched-exponential relaxation, or Kohlrausch–Williams–Watts law:
$r_t \sim e^{-0.4 t^{0.4}}$

The half-life $\tau$ of $r_t$ can then be determined by solving the equation

$\int_{0}^{\tau} e^{-0.4 t^{0.4}} \, dt = \frac{1}{2} \int_{0}^{\infty} e^{-0.4 t^{0.4}} \, dt$

A numerical calculation gives $\tau \approx 69$ minutes, or about 1 hour. This characteristic time is consistent with the fact that a story usually stays on the front page for between 1 and 2 hours.
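The half-life computation can be reproduced numerically (a sketch assuming $t$ is measured in minutes, which is consistent with the decay values quoted above; trapezoidal integration on a truncated grid stands in for the improper integral):

```python
import numpy as np

# r(t) = exp(-0.4 * t**0.4) with t in minutes; the integrand is
# negligible beyond the upper limit used here.
t = np.linspace(0.0, 10_000.0, 1_000_001)
r = np.exp(-0.4 * t**0.4)

# Trapezoidal cumulative integral of r from 0 to each grid point
dt = t[1] - t[0]
cum = np.concatenate(([0.0], np.cumsum((r[1:] + r[:-1]) / 2.0 * dt)))

# Half-life: the point where half of the total area has accumulated;
# this comes out at about 69 minutes, matching the value quoted above.
tau = t[np.searchsorted(cum, cum[-1] / 2.0)]
```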

## Numerical simulation

### 1. The normal distribution in Python

```python
from random import normalvariate

import numpy as np
import matplotlib.pyplot as plt

# 500 samples from a normal distribution with mean 0.5 and sd 0.1
x = [normalvariate(0.5, 0.1) for i in range(500)]
plt.hist(x)
plt.show()
```

### 2. Stochastic dynamical growth model

```python
def random_model(mean, sd):
    """Multiplicative growth N_t = (1 + X_t) * N_{t-1} with X_t ~ Normal(mean, sd)."""
    Nt = {0: 1}
    for t in range(1, 100):
        xt = normalvariate(mean, sd)
        Nt[t] = (1 + xt) * Nt[t - 1]
    return Nt

fig = plt.figure(figsize=(12, 4), facecolor='white')
cmap = plt.get_cmap('rainbow_r', 10)

for mean in np.linspace(0.1, 0.9, 10):
    Nt = random_model(mean, 0.1)
    plt.plot(list(Nt.keys()), list(Nt.values()), color=cmap(mean),
             linestyle='-', marker='.', label=str(np.round(mean, 2)))
plt.yscale('log')
plt.ylabel('Nt (log scale)'); plt.xlabel('t')
plt.legend(loc=2, fontsize=8)

plt.show()
```


### 3. Stochastic dynamical growth model with decay

```python
def random_model_with_decay(mean, sd, decay_parameter):
    """Growth N_t = (1 + r_t X_t) * N_{t-1} with r_t = exp(-t**decay_parameter)."""
    Nt = {0: 1}
    for t in range(1, 100):
        xt = normalvariate(mean, sd)
        rt = np.exp(-(t ** decay_parameter))  # simplified decay factor
        Nt[t] = (1 + rt * xt) * Nt[t - 1]
    return Nt

fig = plt.figure(figsize=(12, 4), facecolor='white')
cmap = plt.get_cmap('rainbow_r', 10)

for mean in np.linspace(0.5, 0.9, 1):
    for dp in np.linspace(0.1, 0.5, 5):
        Nt = random_model_with_decay(mean, 0.1, dp)
        plt.plot(list(Nt.keys()), list(Nt.values()),
                 color=cmap(mean * dp), linestyle='-', marker='.',
                 label='Mean = ' + str(np.round(mean, 2)) + ' & Decay parameter = ' + str(dp))
plt.yscale('log')
plt.ylabel('Nt (log scale)'); plt.xlabel('t')
plt.legend(loc=2, fontsize=8)

plt.show()
```


## Appendix: proof that $1 + x \approx e^x$
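Expanding $e^x$ in its Taylor series around $x = 0$:

$e^x = \sum_{n = 0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$

so $e^x - (1 + x) = \frac{x^2}{2} + O(x^3)$. For $|x| \ll 1$ the error is second order in $x$, hence $1 + x \approx e^x$.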

Back to [[Collective Order]]


# References

Wu F, Huberman BA (2007) Novelty and collective attention. Proceedings of the National Academy of Sciences 104: 17599–17601.