Random variable. The correspondence that assigns to each possible value of a random variable its probability is called the distribution law of the random variable.

The simplest form of specifying this law is a table that lists the possible values of the random variable and their corresponding probabilities.

Such a table is called the distribution series of the random variable X.



Distribution function

The distribution law is a complete and exhaustive characteristic of a discrete random variable. However, it is not universal, since it cannot be applied to continuous random variables. A continuous random variable takes an infinite number of values that fill a certain interval. It is practically impossible to compile a table that includes all the values of a continuous random variable. Therefore, for a continuous random variable there is no distribution law in the sense in which it exists for a discrete random variable.

How to describe a continuous random variable?

For this, one uses not the probability of the event X = x but the probability of the event X < x, where x is some variable. The probability of this event depends on x, i.e. it is a function of x.

This function is called the distribution function of the random variable X and is denoted F(x):

F(x) = P(X < x)

The distribution function is a universal characteristic of a random variable. It exists for any random variables: discrete and continuous.

Distribution function properties:

1. If x1 > x2, then F(x1) ≥ F(x2), i.e. the distribution function is non-decreasing.

2. F(-∞)=0

3. F(+∞)=1

The distribution function of a discrete random variable is a discontinuous step function; jumps occur at the points corresponding to the possible values of the random variable and are equal to the probabilities of these values. The sum of all the jumps equals one.






Numerical characteristics of random variables.

The main characteristics of discrete random variables are:

distribution function;

distribution series;

for a continuous random variable:

distribution function;

distribution density.

Each of these laws is a function, and specifying this function completely describes the random variable.

However, when solving a number of practical problems it is not always necessary to characterize a random variable completely. Often it suffices to indicate only a few numerical parameters that characterize it.

Such characteristics, the purpose of which is to represent in a concentrated form the most significant features of the distribution, are called numerical characteristics of a random variable.

Position Characteristics

(expected value, mode, median)

Of all the numerical characteristics of random variables, the ones used most often are those describing the position of the random variable on the numerical axis: they indicate some average value around which the possible values of the random variable are grouped.

For this, the following characteristics are used:

· the expected value;

· the mode;

· the median.

The mathematical expectation (average value) is calculated as follows:

M[X] = (x1p1 + x2p2 + … + xnpn) / (p1 + p2 + … + pn) = Σ xi pi / Σ pi

Since Σ pi = 1, this reduces to M[X] = Σ xi pi.

The mathematical expectation of a random variable is the sum of the products of all its possible values by the probabilities of these values.

The above formulation is valid only for discrete random variables.

For continuous quantities

M[X] = ∫ x f(x) dx (the integral is taken over the entire axis), where f(x) is the distribution density of X.

There are various ways of calculating the average value. The most common forms of averages are the arithmetic mean, the median, and the mode.

The arithmetic mean is obtained by dividing the total value of the attribute over the entire homogeneous statistical population by the number of units of this population. It is calculated by the formula:

Xav = (X1 + X2 + … + Xn) / n,

where Xi is the value of the attribute of the i-th unit of the population, n is the number of units of the population.

The mode of a random variable is its most probable value.




The median is the value located in the middle of the ordered series. For an odd number of units in the series the median is unique and lies exactly in the middle; for an even number it is defined as the average of the two adjacent units occupying the middle positions.
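The three position characteristics above can be computed with Python's standard library; the sample below is hypothetical and chosen so the computation is easy to follow.

```python
# Mean, mode and median of a small sample, using the statistics module.
from statistics import mean, median, mode

sample = [8, 9, 10, 10, 10, 11, 12]  # hypothetical data, already sorted

print(mean(sample))    # arithmetic mean: sum of values divided by their count
print(mode(sample))    # mode: the most frequent value
print(median(sample))  # median: the middle value of the ordered series
```

For this sample all three characteristics coincide (they equal 10), which is typical for a roughly symmetric distribution; for skewed data they generally differ.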

Statistics is a branch of science that studies the quantitative side of mass phenomena of social life consisting of separate elements, or units. The combination of these elements constitutes a statistical population. The purpose of the study is to establish quantitative patterns in the development of the phenomenon. It is based on the application of probability theory and the law of large numbers. The essence of this law is that, despite the individual random fluctuations of individual elements of the population, a certain regularity characteristic of the population as a whole manifests itself in the total mass. The greater the number of single elements characterizing the phenomenon under study, the more clearly the regularity inherent in this phenomenon is revealed.

Crime is a social, mass phenomenon, it is a statistical set of numerous facts of single criminal manifestations. This gives grounds to apply the methods of the theory of statistics for its study.

In statistical studies of social phenomena, three stages can be distinguished:

1) statistical observation, i.e. collection of primary statistical material;

2) summary processing of the collected data, during which totals and generalizing indicators are calculated and the results are presented in the form of tables and graphs;

3) analysis, during which the regularities of the studied statistical population, the relationship between its various components are revealed, a meaningful interpretation of generalizing indicators is carried out.

The first stage of statistical research is statistical observation. It plays a special role, since the errors made in the process of data collection are almost impossible to correct at further stages of work, which ultimately leads to incorrect conclusions about the properties of the phenomenon under study, their incorrect interpretation.

According to the method of recording facts, statistical observation is divided into continuous and discontinuous. Continuous, or current, observation is one in which facts are established and recorded as they arise. In discontinuous observation, facts are recorded either regularly at certain intervals or as the need arises.

According to the coverage of the units of the surveyed population, continuous and non-continuous observation are distinguished. A continuous observation is one in which all units of the studied population are subject to accounting. So, for example, the registration of crimes is theoretically a continuous observation. However, in practice, a certain part of the crimes, called latent ones, remains outside the statistical totality under study, and therefore, in fact, such an observation is not continuous. A discontinuous observation is one in which not all units of the studied population are subject to registration. It is divided into several types: observation of the main array, selective observation, and some others.

Observation of the main array (it is sometimes called the imperfect continuous method) is a type of non-continuous observation in which, out of the entire set of units of the object, such a part of them is observed that constitutes the overwhelming, predominant share of the entire set. Observation by this method is practiced in those cases where the continuous coverage of all units of the population is associated with particular difficulties, and at the same time, the exclusion from the observation of a certain number of units does not significantly affect the conclusions about the properties of the entire population. Therefore, the registration of crimes can rather be attributed to this type of observation.

The most perfect type of non-continuous observation is selective, in which, in order to characterize the entire population, only a certain part of it is subjected to examination, however, taken on a sample according to certain rules. The main condition for the correctness of the sampling observation is such a selection, as a result of which the selected part of the units, according to all the characteristics to be studied, would accurately characterize the entire population as a whole. Most often, selective observation is used in the course of sociological research. In the future, we will consider the rules and methods for selecting units during selective observation.

After the primary material is collected and verified, the second stage of the statistical study is carried out - a summary. Statistical observation provides material that characterizes individual units of the object of study. The task of the summary is to summarize, systematize and generalize the results of the observation so that it becomes possible to identify the characteristic features and essential properties, to discover the patterns of the studied phenomena and processes.

The simplest example of a summary is the summation of all reported crimes. However, such a generalization does not give a complete picture of all the properties of the criminogenic situation. In order to characterize crime deeply and comprehensively, it is necessary to know how the total number of crimes is distributed by type, time, place and method of committing, etc.

The distribution of units of the object under study into homogeneous groups according to their essential characteristics is called statistical grouping. Objects studied by statistics are usually characterized by many properties and relationships expressed by various features. Therefore, the grouping of the examined objects can be carried out depending on the objectives of the statistical study according to one or more of these features. Thus, the personnel of the body can be grouped by positions, special ranks, age, length of service, marital status, etc.

As a result of the processing and systematization of primary statistical materials, series of numerical indicators are obtained that characterize particular aspects of the studied phenomena and processes or their change. These series are called statistical series. By their content, statistical series are divided into two types: distribution series and dynamic series. Distribution series are series that characterize the distribution of the units of the initial population according to some one attribute, the varieties of which are arranged in a certain order. For example, the distribution of the total number of crimes by type, or of all personnel by position, are distribution series.

Dynamic series are series that characterize the change in the size of social phenomena over time. A detailed consideration of such series and their use in the analysis and forecast of the criminal situation is the subject of a separate lecture.

The results of statistical observation and of the summary of its materials are expressed primarily in absolute values (indicators). Absolute values show the dimensions of a social phenomenon under given conditions of place and time, for example, the number of crimes committed or the number of persons who committed them, the actual number of personnel or the number of vehicles. Absolute values are divided into individual and total. Individual absolute values are those expressing the size of the quantitative characteristics of individual units of a particular set of objects (for example, the number of victims or the material damage in a particular criminal case, the age or length of service of a given employee, his salary, etc.). They are obtained directly in the process of statistical observation and are recorded in primary accounting documents. Individual absolute values are the basis of any statistical study.

In contrast to individual ones, total absolute values characterize the final value of an attribute for a certain set of objects covered by statistical observation. They are obtained either by directly counting the number of units of observation (for example, the number of crimes of a certain type), or as a result of summing the values of an attribute over individual units of the population (for example, the damage caused by all crimes).

However, absolute values taken by themselves do not always give a proper idea of the phenomena and processes under study. Therefore, along with absolute values, relative values are of great importance in statistics.

Comparison is the main technique for evaluating statistical data and an integral part of all methods of their analysis. However, a simple comparison of two quantities is not enough to accurately assess their relationship. This ratio must also be measured, and the role of such a measure is played by relative values.

Unlike absolute, relative values ​​are derived indicators. They are obtained not as a result of simple summation, but by relative (multiple) comparison of absolute values.

Depending on the nature of the phenomenon under study and the specific objectives of the study, relative quantities can have a different form (appearance) of expression. The simplest form of expressing a relative value is a number (whole or fractional), showing how many times one value is greater than the other, taken as the basis of comparison, or what part of it is.

Most often, in the analytical activities of the internal affairs bodies, a different form of representing relative numbers is used: the percentage, in which the base value is taken as 100. To obtain a percentage, the result of dividing one absolute value by another (the base) is multiplied by 100.
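The percentage calculation described above is a one-line computation; the figures below are hypothetical, chosen only to illustrate the rule.

```python
# A relative value expressed as a percentage: divide one absolute value
# by the base value, then multiply by 100 (the base is taken as 100).
solved = 150      # hypothetical: number of crimes solved
registered = 600  # hypothetical base: number of crimes registered

ratio = solved / registered  # simple relative value (a fraction of the base)
percent = ratio * 100        # the same ratio with the base taken as 100

print(f"{percent:.1f}%")  # prints: 25.0%
```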

An important role in the summary processing of statistical data belongs to the average value. Since each individual unit of the statistical population has individual characteristics that differ in quantitative value from any other, the average value is used to characterize the properties of the entire statistical population as a whole. In statistics, the average value is understood as an indicator reflecting the level of a varying attribute per unit of a homogeneous population.

To characterize the homogeneity of the statistical population with respect to the relevant attribute, various indicators are used: variation, variance, and standard deviation. These indicators make it possible to assess to what extent the corresponding average value reflects the properties of the entire population as a whole, and whether it can be used as a generalizing characteristic of this statistical population. Detailed consideration of these indicators is a separate question.

In a risk situation we know the outcomes of a particular alternative and the probabilities with which these outcomes can occur. That is, we know the probability distribution of the outcomes, so they can be represented (modeled) as a random variable. In this section we recall information from probability theory about random variables and the ways of specifying them, which will be needed for further study of the material of the book.

According to the classical definition, a random variable is a quantity whose value can change randomly from trial to trial. That is, in each "trial" it takes one single value from a certain set, and it is impossible to predict in advance which one.

Random variables are divided into discrete and continuous. A discrete random variable can take only a finite or countable set of values. A continuous random variable can take any value from some closed or open interval, including an infinite one.

3.2.2. Distribution law of a random variable

A random variable is determined by its distribution law. The distribution law is considered specified if the following are given:

  • the set of possible values of the random variable (possibly infinite), and
  • the probability of the random variable falling into an arbitrary region of this set, or a law (formula) that allows such a probability to be calculated.

In essence, probability is an indicator characterizing the possibility that the random variable falls in a given region.

The most widespread way of specifying the probabilities of the different values of a random variable is to specify the probability distribution function, usually shortened to the distribution function.

The distribution function of a random variable X is the function F(x) that gives the probability that the random variable takes a value less than a specific value x, that is:

F(x) = P(X < x)

Here X ("big x") denotes the random variable itself, and x ("small x") a specific value from the set of its possible values.

The distribution function is non-decreasing. When x tends to minus infinity, it tends to zero, and when x tends to plus infinity, it tends to one.

The form of representation of the law of distribution of a random variable can be different and depends on whether it is discrete or continuous.

The following dependencies follow from the definition of the distribution function:

the probability that a random variable will take values ​​in the interval from a to b:

P(a ≤ X < b) = F(b) - F(a)

the probability that a random variable will take a value not less than a:

P(X ≥ a) = 1 - F(a)
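These identities can be checked numerically. The sketch below assumes, purely for illustration, a random variable uniformly distributed on [0, 1); the function names are mine, not from the source.

```python
# Interval probabilities expressed through the distribution function F(x).
def F(x):
    # Distribution function of the uniform distribution on [0, 1)
    # (an assumed example distribution).
    return min(max(x, 0.0), 1.0)

def prob_interval(a, b):
    # P(a <= X < b) = F(b) - F(a)
    return F(b) - F(a)

def prob_at_least(a):
    # P(X >= a) = 1 - F(a)
    return 1.0 - F(a)

print(prob_interval(0.2, 0.5))  # 0.3
print(prob_at_least(0.75))      # 0.25
```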

3.2.3. Ways to represent the distribution of a discrete random variable

A discrete random variable can be completely specified by its distribution function or by its distribution series (table). These can be presented in tabular, analytical, or graphical form.

Suppose a random variable X can take three possible values, 25, 45 and 50, with probabilities 25%, 35% and 40% respectively. The distribution series of this random variable looks like this:

X: 25 45 50
P: 0.25 0.35 0.40

The distribution function of the same random variable, which gives the probability that the variable takes a value less than x, can be written as follows:

F(x) = 0 for x ≤ 25; F(x) = 0.25 for 25 < x ≤ 45; F(x) = 0.60 for 45 < x ≤ 50; F(x) = 1 for x > 50.

Figure 3.1 presents graphical methods for setting the distribution law of this discrete random variable X .

Fig.3.1.

On the graph of the probability distribution series, each possible value xj is represented by a bar whose height equals its probability pj. The sum of the heights of all M bars (i.e. of all the probabilities) equals one, since they cover all possible values of x:

p1 + p2 + … + pM = 1

Sometimes, instead of bars, a broken line is drawn connecting the probabilities of the values of the random variable.

The probability that a discrete random variable takes a value less than a equals the sum of the probabilities of all its possible values that are less than a:

P(X < a) = Σ pj, where the sum is taken over all xj < a.

By definition, this is the value of the distribution function at the point x = a. If we plot the values of the distribution function as x "runs" through all values from minus infinity to plus infinity, we obtain the graph of the distribution function. For a discrete random variable it is stepped. On the interval from minus infinity to the first possible value x1 it equals zero, since the random variable cannot take any value on this interval.

Then each possible value xj increases the distribution function by an amount equal to the probability pj of this value. Between two consecutive values xj and xj+1 the distribution function does not change, since there are no other possible values of x there and hence no jumps. Finally, at the last possible value xM there is a jump of size pM, and the distribution function reaches its limit value of one. Further on, the graph runs at this level parallel to the x-axis; it never rises higher, since a probability cannot exceed one.
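The step-function behavior just described can be reproduced in a few lines for the example series above (values 25, 45, 50 with probabilities 0.25, 0.35, 0.40):

```python
# Distribution series of the discrete example and its step-shaped
# distribution function F(x) = P(X < x).
values = [25, 45, 50]
probs = [0.25, 0.35, 0.40]

def F(x):
    # Sum the probabilities of all possible values strictly below x;
    # each possible value contributes a jump of size equal to its probability.
    return sum(p for v, p in zip(values, probs) if v < x)

print(F(25))   # 0.0  (no possible value lies below 25)
print(F(40))   # 0.25 (only the value 25 lies below 40)
print(F(47))   # 0.6  (25 and 45 lie below 47)
print(F(100))  # 1.0  (all possible values lie below 100)
```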

3.2.4. Ways to represent the distribution of a continuous random variable

A continuous random variable is also specified by its distribution function, presented, as a rule, in analytical form. In addition, it can be fully described by the probability density function f(x), which is the first derivative of the distribution function F(x):

f(x) = F'(x)

The probability density function is non-negative, and its integral over the entire axis equals one:

∫ f(x) dx = 1 (the integral is taken from minus infinity to plus infinity).

As an example, take a continuous random variable distributed according to the normal law.

Its probability density function is given analytically by the formula:

f(x) = (1 / (σX √(2π))) exp(-(x - mX)² / (2σX²))

Here mX and σX are the distribution parameters: mX characterizes the location of the distribution center, and σX the spread relative to this "center".
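The normal density is easy to write out in code. The sketch below uses the standard formula with illustrative parameter values mX = 0 and σX = 1 (my choice, not from the text), and checks the normalization property with a crude Riemann sum.

```python
# The normal probability density function and a numerical check
# that it integrates to (approximately) one.
import math

def normal_pdf(x, m=0.0, sigma=1.0):
    # f(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - m)^2 / (2*sigma^2))
    coef = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coef * math.exp(-((x - m) ** 2) / (2.0 * sigma ** 2))

# The density is symmetric about the center m:
print(normal_pdf(1.0) == normal_pdf(-1.0))  # True

# Left Riemann sum over [-8, 8]; the tails beyond are negligible.
step = 0.001
total = sum(normal_pdf(-8.0 + i * step) * step for i in range(16000))
print(round(total, 6))  # approximately 1.0
```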

A random variable is called discrete if the set of all its possible values is a finite or infinite but necessarily countable set, i.e. a set all of whose elements can (at least in principle) be numbered and written out in sequence.

The random variables listed above, such as the number of points rolled on a die, the number of pharmacy visitors during the day, or the number of apples on a tree, are discrete random variables.

The most complete information about a discrete random variable is given by the distribution law of this variable: the correspondence between all possible values of the random variable and their probabilities.

The distribution law of a discrete random variable is often set in the form of a two-row table, the first row of which lists all possible values of the variable (in ascending order), and the second row the corresponding probabilities:

X: x1 x2 … xn
P: p1 p2 … pn

Since all possible values of a discrete random variable form a complete system, the sum of their probabilities equals one (the normalization condition):

p1 + p2 + … + pn = 1

Example 4. There are ten student groups with 12, 10, 8, 10, 9, 12, 8, 11, 10 and 9 students respectively. Write the distribution law for the random variable X defined as the number of students in a randomly selected group.

Solution. The possible values of the random variable X (in ascending order) are 8, 9, 10, 11, 12. Since two of the ten groups have 8 students, the probability that a randomly selected group has 8 students is P(X = 8) = 2/10 = 0.2.

Similarly, you can find the probabilities of the remaining values ​​of the random variable X:

Thus, the desired distribution law:

X: 8   9   10  11  12
P: 0.2 0.2 0.3 0.1 0.2
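The distribution law of Example 4 can be built directly from the list of group sizes; a minimal sketch using `collections.Counter`:

```python
# Constructing the distribution law of Example 4 from the raw data:
# each probability is (number of groups with that size) / (total groups).
from collections import Counter

groups = [12, 10, 8, 10, 9, 12, 8, 11, 10, 9]
n = len(groups)

law = {x: count / n for x, count in sorted(Counter(groups).items())}
print(law)  # {8: 0.2, 9: 0.2, 10: 0.3, 11: 0.1, 12: 0.2}

# Normalization condition: the probabilities sum to one.
print(round(sum(law.values()), 10))  # 1.0
```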

The distribution law of a discrete random variable can also be specified using a formula that allows for each possible value of this variable to determine the corresponding probability (for example, the Bernoulli distribution, the Poisson distribution). To describe certain features of a discrete random variable, use its basic numerical characteristics: mathematical expectation, variance and standard deviation (standard).

The mathematical expectation M(X) (the notation μ is also used) of a discrete random variable X is the sum of the products of each of its possible values by the corresponding probability:

M(X) = x1p1 + x2p2 + … + xnpn = Σ xi pi

The main meaning of the mathematical expectation of a discrete random variable is that it represents the average value of this variable. In other words, if a certain number of trials have been performed and the arithmetic mean of all observed values of the discrete random variable X has been found, then this arithmetic mean is approximately equal to the mathematical expectation of the random variable (the more accurately, the greater the number of trials).

Let us present some properties of mathematical expectation.

1. The mathematical expectation of a constant is equal to that constant:

M(C) = C

2. The mathematical expectation of the product of a constant factor by a discrete random variable is equal to the product of this constant factor by the mathematical expectation of a given random variable:

M(kX) = kM(X)

3. The mathematical expectation of the sum of two random variables is equal to the sum of the mathematical expectations of these variables:

M(X+Y)=M(X)+M(Y)

4. The mathematical expectation of the product of independent random variables is equal to the product of their mathematical expectations:

M(XY)=M(X)M(Y)
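Properties 2-4 can be verified numerically. The sketch below uses two small hypothetical distributions and computes the distributions of 3X, X + Y and XY assuming X and Y are independent (so joint probabilities multiply):

```python
# Numerical check of the expectation properties M(kX) = kM(X),
# M(X+Y) = M(X) + M(Y), and M(XY) = M(X)M(Y) for independent X, Y.
def expect(dist):
    # dist maps each possible value to its probability
    return sum(x * p for x, p in dist.items())

X = {0: 0.5, 2: 0.5}  # hypothetical distribution, M(X) = 1
Y = {1: 0.4, 6: 0.6}  # hypothetical distribution, M(Y) = 4

# Distribution of kX for k = 3: values scale, probabilities unchanged.
kX = {3 * x: p for x, p in X.items()}
print(expect(kX), 3 * expect(X))  # equal (property 2)

# Distributions of X+Y and X*Y under independence: probabilities multiply.
XplusY, XtimesY = {}, {}
for x, p in X.items():
    for y, q in Y.items():
        XplusY[x + y] = XplusY.get(x + y, 0.0) + p * q
        XtimesY[x * y] = XtimesY.get(x * y, 0.0) + p * q

print(expect(XplusY), expect(X) + expect(Y))   # equal (property 3)
print(expect(XtimesY), expect(X) * expect(Y))  # equal (property 4)
```

Property 4 relies on independence; for dependent variables the product rule generally fails.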

The individual values of a discrete random variable are grouped around the mathematical expectation as a center. To characterize the degree of spread of the possible values of a discrete random variable relative to its mathematical expectation, the concept of the dispersion (variance) of a discrete random variable is introduced.

The dispersion D(X) (the notation σ² is also used) of a discrete random variable X is the mathematical expectation of the squared deviation of this variable from its mathematical expectation:

D(X) = σ² = M((X - μ)²), (11)

In practice, it is more convenient to calculate the variance by the formula

D(X) = σ² = M(X²) - μ², (12)

Let us list the main properties of the dispersion.

  1. The dispersion of a constant value is zero: D(C) = 0.
  2. The dispersion of any random variable is a non-negative number:

D(X)≥0

  3. The dispersion of the product of a constant factor k and a discrete random variable equals the product of the square of this constant factor and the dispersion of the random variable:

D(kX)=k 2 D(X).

In practice, not the variance itself but another measure of the spread of the random variable X is most commonly used: the standard deviation (or simply the standard).

The standard deviation of a discrete random variable is the square root of its variance:

σ = √D(X)

The convenience of the standard deviation is that it has the dimension of the random variable X itself, whereas the variance has the dimension of the square of X.
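Formulas (11) and (12) give the same result; a short check for the distribution of Example 4:

```python
# Variance of the Example 4 random variable, computed both by the
# definition (11) and by the short formula (12), plus the standard deviation.
import math

dist = {8: 0.2, 9: 0.2, 10: 0.3, 11: 0.1, 12: 0.2}

mu = sum(x * p for x, p in dist.items())                    # M(X) = 9.9
var_def = sum((x - mu) ** 2 * p for x, p in dist.items())   # formula (11)
var_short = sum(x * x * p for x, p in dist.items()) - mu**2 # formula (12)
sigma = math.sqrt(var_def)                                  # standard deviation

print(round(mu, 6))         # 9.9
print(round(var_def, 6))    # 1.89
print(round(var_short, 6))  # 1.89 -> the two formulas agree
print(round(sigma, 6))      # approximately 1.374773
```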

Elements of Probability Theory

Definition. A random variable is a numerical quantity whose value depends on which elementary outcome occurred as a result of an experiment with a random outcome. The set of all values that a random variable can take is called the set of possible values of this random variable.

Random variables are denoted X, Y1, Zi; ξ, η1, μi, and their possible values x3, y1k, zij.

Example. In the experiment with a single throw of a die, the random variable X is the number of points rolled. The set of possible values of X has the form

{x1 = 1, x2 = 2, …, x6 = 6}.

We have the following correspondence between elementary outcomes ω and values ​​of the random variable X:

That is, each elementary outcome ωi, i = 1, …, 6, is assigned the number i.

Example. A coin is tossed until the first appearance of the "coat of arms" (heads). In this experiment one can introduce, for example, the following random variables: X, the number of tosses up to and including the first appearance of the "coat of arms", with the set of possible values {1, 2, 3, …}, and Y, the number of "digits" (tails) that came up before the first "coat of arms", with the set of possible values {0, 1, 2, …} (clearly X = Y + 1). In this experiment the space of elementary outcomes Ω can be identified with the set

{G, CG, CCG, …, C…CG, …},

and the elementary outcome (C…CG) is assigned the number m + 1 (for X) or m (for Y), where m is the number of repetitions of the letter "C".

Definition. A scalar function X(ω) defined on the space of elementary outcomes is called a random variable if for any x ∈ R the set {ω : X(ω) < x} is an event.

Distribution function of a random variable

To study the probabilistic properties of a random variable, one needs to know the rule that allows one to find the probability that the random variable takes a value from a given subset of its values. Any such rule is called the law of probability distribution, or simply the distribution, of the random variable.

The general distribution law inherent in all random variables is the distribution function.

Definition. The distribution function (of probabilities) of a random variable X is the function F(x) whose value at a point x equals the probability of the event {X < x}, that is, of the event consisting of those and only those elementary outcomes ω for which X(ω) < x:

F(x) = P{X < x}.

It is usually said that the value of the distribution function at a point x is equal to the probability that the random variable X takes on a value less than x.

Theorem. The distribution function satisfies the following properties:

1. 0 ≤ F(x) ≤ 1;
2. F(x) is non-decreasing;
3. F(-∞) = 0, F(+∞) = 1.

A typical form of the distribution function.

Discrete random variables

Definition. Random variable X is called discrete if the set of its possible values ​​is finite or countable.

Definition. The distribution series (of probabilities) of a discrete random variable X is a table consisting of two rows: the top row lists all possible values of the random variable, and the bottom row the probabilities pi = P{X = xi} that the random variable takes these values.

To check that the table is correct, it is recommended to sum the probabilities pi. By the normalization axiom, this sum must equal one:

p1 + p2 + … + pn = 1

Based on the distribution series of a discrete random variable, one can construct its distribution function F(x). Let X be a discrete random variable given by its distribution series, with x1 < x2 < … < xn. Then for all x ≤ x1 the event {X < x} is impossible, so by definition F(x) = 0. If x1 < x ≤ x2, the event {X < x} consists of those and only those elementary outcomes for which X(ω) = x1; hence F(x) = p1. Similarly, when x2 < x ≤ x3 the event {X < x} consists of the elementary outcomes ω for which either X(ω) = x1 or X(ω) = x2, that is, {X < x} = {X = x1} + {X = x2}; hence F(x) = p1 + p2, and so on. For x > xn the event {X < x} is certain, so F(x) = 1.

The distribution law of a discrete random variable can also be specified analytically, in the form of a formula, or graphically. For example, the distribution of points on a die is described by the formula

P(X=i) = 1/6, i=1, 2, …, 6.

Some Discrete Random Variables

Binomial distribution. A discrete random variable X is distributed according to the binomial law if it takes the values 0, 1, 2, …, n in accordance with the distribution given by the Bernoulli formula:

P(X = m) = C(n, m) p^m q^(n-m), m = 0, 1, …, n.

This distribution is nothing other than the distribution of the number of successes X in n trials of the Bernoulli scheme with success probability p and failure probability q = 1 - p.
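The Bernoulli formula is straightforward to evaluate; the parameter values n = 5, p = 0.3 below are illustrative:

```python
# Binomial probabilities P(X = m) = C(n, m) * p^m * q^(n-m).
from math import comb

n, p = 5, 0.3  # illustrative parameters
q = 1 - p

pmf = [comb(n, m) * p**m * q**(n - m) for m in range(n + 1)]

print(round(pmf[0], 6))         # probability of zero successes: q^n
# The probabilities over all outcomes m = 0..n sum to one:
print(round(sum(pmf), 10))      # 1.0
```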

Poisson distribution. A discrete random variable X is distributed according to the Poisson law if it takes non-negative integer values with probabilities

P(X = m) = λ^m e^(-λ) / m!, m = 0, 1, 2, …,

where λ > 0 is the parameter of the Poisson distribution.

The Poisson distribution is also called the law of rare events, since it always appears where a large number of trials are performed, in each of which a "rare" event occurs with a small probability.

In accordance with Poisson's law, distributed, for example, the number of calls received during the day at the telephone exchange; the number of meteorites that fell in a certain area; the number of decayed particles in the radioactive decay of matter.
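A quick numerical illustration of the Poisson probabilities; λ = 2 is an arbitrary illustrative choice:

```python
# Poisson probabilities P(X = m) = lambda^m * e^(-lambda) / m!.
from math import exp, factorial

lam = 2.0  # illustrative parameter

def poisson_pmf(m):
    return lam ** m * exp(-lam) / factorial(m)

print(round(poisson_pmf(0), 6))  # e^(-2), the probability of zero events
# The probabilities over m = 0, 1, 2, ... sum to one (the tail beyond
# m = 49 is negligible here):
print(round(sum(poisson_pmf(m) for m in range(50)), 10))  # 1.0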

Geometric distribution. Consider the Bernoulli scheme again. Let X be the number of failures before the first success occurs. Then X is a discrete random variable taking the values 0, 1, 2, …. Let us determine the probability of the event {X = i}.

  • X = 0 if the first trial succeeds; therefore P(X = 0) = p.
  • X = 1 if the first trial fails and the second succeeds; then P(X = 1) = qp.
  • X = 2 if the first two trials fail and the third succeeds; then P(X = 2) = q²p.
  • Continuing in the same way, we get P(X = i) = q^i p, i = 0, 1, 2, …

      A random variable with such a distribution series is called distributed according to a geometric law.
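The geometric probabilities P(X = i) = q^i p are easy to tabulate; p = 0.25 below is an illustrative choice:

```python
# Geometric probabilities: i failures before the first success
# in a Bernoulli scheme with success probability p.
p = 0.25  # illustrative parameter
q = 1 - p

def geom_pmf(i):
    return q ** i * p

print(geom_pmf(0))  # 0.25: success on the very first trial
print(geom_pmf(2))  # 0.140625: two failures, then a success
# The probabilities sum to one (geometric series; the tail beyond
# i = 199 is negligible here):
print(round(sum(geom_pmf(i) for i in range(200)), 10))  # 1.0
```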

RANDOM VARIABLES

One of the most important concepts of probability theory (along with a random event and probability) is the concept of a random variable.

Definition. A random variable is understood to be a variable that, as a result of an experiment, takes one or another value, and it is not known in advance which one.

Random variables (abbreviated r.v.) are denoted by capital Latin letters X, Y, Z, … (or by Greek letters ξ (xi), η (eta), θ (theta), ψ (psi), etc.), and their possible values by the corresponding lowercase letters x, y, z.

Examples of random variables: 1) the number of boys born among a hundred newborns is a random variable with the possible values 0, 1, 2, …, 100;

2) the distance that a projectile flies when fired from a gun is a random variable. Indeed, the distance depends not only on the sight setting but also on many other factors (strength and direction of the wind, temperature, etc.) that cannot be fully taken into account. The possible values of this variable belong to a certain interval (a, b);

3) X- the number of points that appear when throwing a dice;

4) Y- the number of shots before the first hit on the target;

5) Z is the uptime of a device, etc. (a person's height, the dollar exchange rate, the number of defective parts in a batch, the air temperature, a player's winnings, the coordinate of a point chosen at random on a segment, a company's profit, …).

In the first example, the random variable X could take one of the following possible values: 0, 1, 2, …, 100. These values are separated from one another by gaps containing no possible values of X. Thus, in this example the random variable takes separate, isolated possible values. In the second example, the random variable could take any value from the interval (a, b). Here it is impossible to separate one possible value from another by a gap containing no possible values of the random variable.

From what has been said we can already conclude that it is expedient to distinguish between random variables that take only separate, isolated values and random variables whose possible values completely fill a certain interval.

Definition. A discrete (discontinuous) random variable (abbreviated d.r.v.) is one that takes separate, isolated possible values with certain probabilities. The number of possible values of a discrete random variable can be finite or infinite.

Definition. If the set of possible values of a r.v. is uncountable, the variable is called continuous (abbreviated c.r.v.). A continuous random variable can take all values from some finite or infinite interval. Obviously, the number of possible values of a continuous random variable is infinite.



The random variables X and Y (examples 3 and 4) are discrete. The random variable Z (example 5) is continuous: its possible values fill an interval.