Markov property: A discrete-time Markov chain is a stochastic process {Xn} with finite state space S that satisfies the Markov property: P(Xn = xn | X0 = x0, …, Xn−1 = xn−1) = P(Xn = xn | Xn−1 = xn−1) for all x0, …, xn ∈ S and n ⩾ 1. The next step of a Markov chain is independent of the past and depends only on the most recent state.
Transition probabilities: Pij = P(Xn = j | Xn−1 = i) is the probability that a person in the ith position moves to the jth position. When this probability remains constant over time, the chain is time-homogeneous: the transition probabilities depend only on the values of i and j, not on the step n.
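As a concrete sketch of a time-homogeneous chain (the three-state network and its probabilities below are hypothetical, chosen only for illustration), the next state is sampled from the current state's row of the transition matrix, never from the earlier history:

```python
import random

# Hypothetical 3-state time-homogeneous chain: row i of T holds the
# transition probabilities Pij = P(Xn = j | Xn-1 = i), constant in n.
T = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def step(state, rng):
    """Sample the next state using only the current state (Markov property)."""
    u = rng.random()
    cumulative = 0.0
    for j, p in enumerate(T[state]):
        cumulative += p
        if u < cumulative:
            return j
    return len(T[state]) - 1  # guard against floating-point round-off

rng = random.Random(0)  # seeded for reproducibility
state = 0
path = [state]
for _ in range(10):
    state = step(state, rng)
    path.append(state)
```

Each row of T sums to 1, so every row is itself a probability distribution over the next state.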
Introduce the indicator matrix I = (Iij). This is an n×n matrix with Iij = 1 if there is a link from individual i to individual j, and Iij = 0 if no such link exists.
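Building I from a list of directed links is straightforward. The small four-person network below is hypothetical:

```python
# Hypothetical 4-person network given as directed links (i, j): i links to j.
links = [(0, 1), (0, 2), (1, 2), (2, 0), (3, 2)]
n = 4

# Indicator matrix: I[i][j] = 1 if there is a link from i to j, else 0.
I = [[0] * n for _ in range(n)]
for i, j in links:
    I[i][j] = 1
```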
Inlinks point towards a node and outlinks away from a node. The larger the number of inlinks, the more important a node is. The "weight" of an individual owing to his own degree of influence is proportionally distributed to the individuals he points to. This means that being followed by an individual with a higher degree of influence will increase your own degree of influence.
The number of outlinks from an individual i is then the sum of row i of the matrix I: Oi = Σj Iij.
The number of inlinks to an individual i is the sum of column i: Σj Iji.
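These row and column sums can be read off directly from the indicator matrix; the 4×4 matrix below is a hypothetical example:

```python
# Hypothetical indicator matrix for a 4-person network.
I = [
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
]
n = len(I)

# Outlinks of i: sum of row i.  Inlinks of i: sum of column i.
out_links = [sum(I[i]) for i in range(n)]
in_links = [sum(I[i][j] for i in range(n)) for j in range(n)]
```

Here individual 2 collects three inlinks, the most in the network, which under the model above makes it the most important node.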
qi ∈ [0, 1], as qi is a probability; also Σi qi = 1.
Recursive computation of D:
The degree of influence of an individual i can then be defined as Di = Σj∈Ii Dj/Oj, where Ii is the set of individuals that link to i.
Di(0) = 1/n, which is the initial value of D. Through repeated iterations, Di(k+1) = Σj∈Ii Dj(k)/Oj, with Dj(k) being the value of D for individual j at iteration k.
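The recursion can be run as a fixed-point iteration. The three-person network below is hypothetical and strongly connected, so every out-degree Oj is nonzero and the iteration settles:

```python
# Hypothetical strongly connected 3-person network: 0->1, 0->2, 1->2, 2->0.
I = [
    [0, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
]
n = len(I)
O = [sum(row) for row in I]  # out-degrees O_j

D = [1.0 / n] * n  # D_i(0) = 1/n
for _ in range(100):
    # D_i(k+1) = sum of D_j(k) / O_j over the individuals j that link to i
    D = [sum(D[j] / O[j] for j in range(n) if I[j][i]) for i in range(n)]
```

On this network the iterates approach D ≈ (0.4, 0.2, 0.4): individual 2, with the most inlinks, ends up with the largest degree of influence, and Σi Di = 1 is preserved at every step because each Dj is split evenly among j's Oj outlinks.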
Define a matrix T′ = qT, where q is the probability that an individual chooses a "source" that provides information on the particular endorser (looking at a website, reading an article in a magazine, or switching to a TV channel that is airing a commercial with the endorser). q is then a 1×n row vector whose entries are the probabilities of each individual choosing a link to each of the existing members of the network. It is intuitive that an individual i with a high value of Di will also have a high value of qi.
In the kth step, the probability vector will be q(k) = qT^k.
As the number of iterations k grows, the value of qT^k tends towards the unique stationary distribution π.
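A minimal sketch of this convergence, using a hypothetical 2-state row-stochastic matrix T and a uniform starting vector q:

```python
# Hypothetical 2-state transition matrix (each row sums to 1).
T = [
    [0.9, 0.1],
    [0.5, 0.5],
]
n = len(T)

q = [1.0 / n] * n  # uniform starting distribution
for _ in range(200):
    # one step of q <- qT (row vector times matrix)
    q = [sum(q[i] * T[i][j] for i in range(n)) for j in range(n)]
```

For this T, solving π = πT directly gives π = (5/6, 1/6), and the iterate q matches it to many decimal places long before 200 steps.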
The individual with the largest entry of q(k) is the one with the highest degree of influence.