Entropy of Selection Procedures for Unequal Probability Sampling
Corresponding author: Abdul Basit, National College of Business Administration and Economics, Lahore, Pakistan. Email: firstname.lastname@example.org
Shannon introduced the concept of information theory and proposed the information function and the entropy measure. The information function is based on the logarithm of the probability of an event, and the entropy measure is the average of the information function. In the literature, different generalized entropy measures are available for the engineering sciences and reliability theory. Takahashi addressed two main points regarding the weighted probability of events. The first is that the probability of an event is non-linearly transformed into a weighted probability, which has concave and convex regions. The second concerns the case where the probability distribution of the event is unknown.
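As a concrete illustration (not part of the original paper), the minimal Python sketch below computes the information function, $-\log p$, and the Shannon entropy as its probability-weighted average; the use of natural logarithms is an assumption, since no base is fixed above.

import math

def information(p):
    # Information function: the negative log of the probability of an event.
    return -math.log(p)

def shannon_entropy(probs):
    # Entropy: the probability-weighted average of the information function.
    return sum(p * information(p) for p in probs if p > 0)

# A fair coin is maximally uncertain; a biased coin carries less entropy.
print(shannon_entropy([0.5, 0.5]))  # ~0.693 nats
print(shannon_entropy([0.9, 0.1]))  # ~0.325 nats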
Hansen and Hurwitz first introduced the idea of unequal probability sampling in the early 1940s, proposing unequal probability sampling with replacement. The theoretical framework of unequal probability sampling without replacement was introduced in the early 1950s. The estimator of the population total proposed by Horvitz and Thompson is:

$$\hat{Y}_{HT} = \sum_{i=1}^{n} \frac{y_i}{\pi_i},$$

where $y_i$ is the value of the study variable for the i-th sampled unit and $\pi_i$ is its first-order inclusion probability.
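As a hedged illustration of this estimator, the sketch below evaluates $\hat{Y}_{HT}$ for a size-2 sample; the data values and inclusion probabilities are invented for the example.

def horvitz_thompson_total(y, pi):
    # HT estimator: sample values weighted by inverse inclusion probabilities.
    return sum(y_i / pi_i for y_i, pi_i in zip(y, pi))

# Hypothetical sample of n = 2 units drawn with unequal probabilities.
print(horvitz_thompson_total(y=[12.0, 30.0], pi=[0.2, 0.5]))  # 120.0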
Basit and Shahbaz derived a general class of selection procedures for unequal probability sampling for samples of size 2. The probability of inclusion of the i-th unit in the sample under this selection procedure is given as:
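Whatever specific form $\pi_i$ takes for this class, in any size-2 design the first-order inclusion probability of unit i is the sum of the selection probabilities $p(i, j)$ over all samples containing unit i. A minimal sketch of that general identity, with hypothetical pairwise probabilities:

def inclusion_probabilities(pair_probs, N):
    # pi_i = sum of p(i, j) over all size-2 samples {i, j} containing unit i.
    pi = [0.0] * N
    for (i, j), p in pair_probs.items():
        pi[i] += p
        pi[j] += p
    return pi

# Hypothetical selection probabilities for N = 3; they sum to 1 over samples.
pair_probs = {(0, 1): 0.2, (0, 2): 0.3, (1, 2): 0.5}
print(inclusion_probabilities(pair_probs, N=3))  # [0.5, 0.7, 0.8]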
They compared the variances of the Horvitz-Thompson estimator under the first selection procedure and the variances of the Murthy estimator under the second selection procedure. In both cases, different values of α and β yield different selection procedures; each pair of α and β provides a new selection procedure. Al-Jararha also derived a class of selection procedures for sampling two units with probability proportional to size, and he too compared the variance of the Horvitz-Thompson estimator.
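The paper does not reproduce the variance expression used in these comparisons; for a fixed-size design, the standard Sen-Yates-Grundy form of the variance of the Horvitz-Thompson estimator is

$$V(\hat{Y}_{HT}) = \sum_{i<j} (\pi_i \pi_j - \pi_{ij}) \left( \frac{y_i}{\pi_i} - \frac{y_j}{\pi_j} \right)^2,$$

where $\pi_{ij}$ is the second-order (joint) inclusion probability of units i and j.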
Entropy of Selection Procedure
Entropy measures the spread or randomness of a sampling design; in simple terms, it is the average amount of information. Entropy has several interpretations, e.g., as a measure of diversity, of the degree of randomness, and of the amount of disorder in a system.
The Shannon entropy for the Basit and Shahbaz selection procedure is:

$$H = -\sum_{i<j} p(i,j) \log p(i,j),$$

where $p(i,j)$ is the probability of selecting the sample consisting of units i and j.
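A minimal sketch of this computation, reusing the hypothetical pairwise probabilities from the earlier example (they stand in for the actual $p(i,j)$ of the selection procedure):

import math

def design_entropy(pair_probs):
    # Shannon entropy of a size-2 design: -sum of p(i, j) * ln p(i, j).
    return -sum(p * math.log(p) for p in pair_probs.values() if p > 0)

pair_probs = {(0, 1): 0.2, (0, 2): 0.3, (1, 2): 0.5}
print(design_entropy(pair_probs))  # ~1.0297 nats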
In this section, the variances of the Horvitz-Thompson estimator and the Murthy estimator have been calculated for both procedures, and the entropy of each selection procedure has been calculated for the values of α and β mentioned in Basit and Shahbaz. For the empirical study, an artificial population has been used. To compare the variances of the Horvitz-Thompson estimator, rank 1 was assigned to the lowest variance, rank 2 to the next lowest, and so on; ranks for the variance of the Murthy estimator were assigned in the same way.
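The ranking step can be sketched as follows; the (α, β)-indexed designs, their pairwise probabilities, and the study variable below are hypothetical stand-ins, since the actual artificial population and selection probabilities come from Basit and Shahbaz.

import math

def yates_grundy_variance(pair_probs, pi, y):
    # Sen-Yates-Grundy variance of the HT estimator for a fixed-size design.
    return sum((pi[i] * pi[j] - p) * (y[i] / pi[i] - y[j] / pi[j]) ** 2
               for (i, j), p in pair_probs.items())

def rank_ascending(values):
    # Rank 1 goes to the smallest value, matching the paper's convention.
    order = sorted(range(len(values)), key=lambda k: values[k])
    ranks = [0] * len(values)
    for r, k in enumerate(order, start=1):
        ranks[k] = r
    return ranks

y = [12.0, 30.0, 25.0]  # hypothetical study variable for N = 3 units
designs = {  # hypothetical designs indexed by (alpha, beta)
    (1, 0.5): {(0, 1): 0.2, (0, 2): 0.3, (1, 2): 0.5},
    (2, 0.5): {(0, 1): 0.25, (0, 2): 0.35, (1, 2): 0.4},
}
variances, entropies = [], []
for pair_probs in designs.values():
    pi = [sum(p for s, p in pair_probs.items() if i in s) for i in range(3)]
    variances.append(yates_grundy_variance(pair_probs, pi, y))
    entropies.append(-sum(p * math.log(p) for p in pair_probs.values()))
print(rank_ascending(variances), rank_ascending(entropies))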
Figures 1-3 show the ranks of the variances of the Horvitz-Thompson and Murthy estimators for both selection procedures, and Figures 4-5 show the trend of entropy for both selection procedures. The rank of the Shannon entropy for both procedures is high, and the variances of both estimators are smaller, for α = -1, 1 and 2.
From the empirical study it is concluded that both selection procedures have higher entropy for α = -1, 1, 2 and any value of β. The variances of the Horvitz-Thompson and Murthy estimators for both procedures have the minimum rank for α = 1, 2 and any value of β. We found that the entropy of a selection procedure and the variance of the estimators have an inverse relationship.
The authors extend special thanks to the editor and reviewer for their encouraging comments, which helped improve the quality of the article.
4. Murthy MN. Ordered and unordered estimators in sampling without replacement. Sankhya. 1957, 18(3-4): 379-390.
5. Shannon CE. A mathematical theory of communication. Bell System Tech Journal. 1948, 27: 379-423.