A First Course In Probability So



I am taking a Computer Science class soon that requires a solid knowledge of the basics of probability. I've only had minimal exposure to probability in classes I've taken in the past, so I need to get up to speed quickly. Can anyone recommend some good self-study resources (e.g. books, online classes, web sites) that I could use to teach myself the fundamentals?


An excellent introductory probability book for self-study is Henk Tijms, Understanding Probability (Cambridge University Press, 2nd ed., 2007). It distinguishes itself from other introductory probability texts by its emphasis on how probability works and how to use it.

A popular resource now is Joe Blitzstein's Stat 110 course on iTunes U (based on Harvard's probability course). Very nice video lectures and exams. I took it in full (doing all the problem sets and exams) and really learned a lot. Plus, you don't need to spend a cent.


There are many techniques in Machine Learning that rely on how likely an event is to occur. You might want to know if two things relate in some way; perhaps when event $A$ occurs, $B$ generally follows. Or you might want to simplify your model, allowing a certain amount of error, in which case you need to know how likely an event is. All of this is part of the field of probability and statistics.


  • This course space end date is set to 19.02.2020. Course MS-A0503. Lecturer: Jukka Kohonen. Head assistant: Hoa Ngo. In questions relating to the exercise sessions, contact the head assistant or the assistant of your exercise group. This MyCourses page will host lecture slides (updated as the course progresses), exercise assignments, and solutions. The reading material for the course is the textbook: S. Ross (2014), Introduction to Probability and Statistics for Engineers and Scientists, available as an e-book via the Aalto library (you may need to log in with your Aalto account).

  • Lecture slides, see the Lectures section.

Notice. Feb 19 exam points are in the Results section. Re-exam information: a re-exam for courses MS-A050x (x = 1, 2, 3, 4) is held on 29.5 at 9-13 as a remote exam in MyCourses. (Note that the exam is 1 hour longer than Oodi says.) Registration in WebOodi by 22.5. Read the exam instructions in advance from the exam page. To access the exam page, you need a key that has been sent by e-mail to everyone who has registered for the exam. Exercise points are valid from periods III and IV. If you registered but still cannot access the exam page, send e-mail at once to joona.karjalainen@aalto.fi and jukka.kohonen@aalto.fi (send to both). Grading / requirements for passing the course: the course can be passed either (a) by taking the exam or (b) by taking the exam and attending the exercise classes.


Note that for most groups the first exercise session (1A) is already before the first lecture. It is therefore highly advisable to consult the reading material (i.e. Ross's textbook, chapter 3) before the session.


In the so-called unconditional version of the Monty Hall problem, the probability of winning is 2/3 if you switch. You can play this version of the game here. The posted article discusses, among other things, the conditional version of the problem. In this version, the probability could be a number from 1/2 to 1. You can also see Wikipedia for more information on the problem.
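The 2/3 figure for the unconditional version is easy to check empirically. Below is a minimal Monte Carlo sketch in Python, under the standard assumptions: the host knows where the car is and always opens a non-chosen door hiding a goat.

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Estimate the win probability of the stay/switch strategies by simulation."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial choice
        # Host opens a door that is neither the pick nor the car.
        # (When he has two options, choosing deterministically does not
        # change the unconditional win probability.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials
```

Running `monty_hall(True)` should give a value near 2/3 and `monty_hall(False)` a value near 1/3.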


It is known that people often misperceive randomness and probabilities. Remember the birthday problem? Another famous example is that of runs or streaks in independent trials, which are much more common than many people believe. For example, the probability of a run of three or more heads in 10 independent tosses of a fair coin is about 0.5. In one of the posted articles, it is shown how probabilities of such runs can be computed.
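The 0.5 figure can be verified exactly by enumerating all $2^{10}$ equally likely toss sequences; a short Python sketch:

```python
from itertools import product

# Enumerate all 2**10 equally likely sequences of 10 fair-coin tosses and
# count those containing a run of three or more heads.
n = 10
hits = sum("HHH" in "".join(seq) for seq in product("HT", repeat=n))
prob = hits / 2 ** n
print(hits, prob)   # 520 of 1024 sequences, i.e. about 0.508
```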


Do you know what card counting is? Do you know that this has little to do with memorizing cards? Do you know who invented it? Do you know that the same person is considered one of the first quants (of Wall Street)?


There is a fascinating interview with the inventor of card counting (Ed Thorp) and Scott Patterson, the author of the book The Quants, here. You can read more about card counting on Wikipedia or just by googling it. Perhaps as a possible reference for your future, there is even a serious probability book on the subject of gambling: The Doctrine of Chances: Probabilistic Aspects of Gambling by Stewart N. Ethier (Springer, 2010).


Coincidences abound in everyday life. They delight, confound, and amaze us. They are disturbing and annoying. Coincidences can point to new discoveries. They can alter the course of our lives; where we work and at what, whom we live with, and other basic features of daily existence often seem to rest on coincidence.


Overview: This course is a solid introduction to the formulation and manipulation of probability models, leading up to proofs of limit theorems: the law of large numbers and the central limit theorem. It is a gateway course to serious study of mathematical statistics and graduate-level applied statistics. Key topics characterizing this course as opposed to more elementary introductions to Probability include joint distributions and change-of-variable formulas for them; conditional expectation and its applications; and the formal proofs of limit theorems.


Course requirements and grading: there will be graded homework sets (one every two weeks, five altogether), which together will count for 20% of the course grade; two or three 15-minute in-class quizzes, which together will count for 10%; two tests counting 20% each; and a final exam counting 30%.


Notes and Guidelines. (a) Homeworks should be handed in as hard copy in class, except for possible occasional due dates on Fridays, when you may submit them electronically, via e-mail, in PDF format. A percentage deduction (at least 15%) of the overall HW score will generally be made for late papers. (b) Some of the problems have answers in the back of the book, and some are similar to problems that can be found in the 9th or earlier editions of the book. You may of course use such information as an aid, but your submitted HW solutions must show all work and be self-contained to get full credit.

Homework 1 (due Friday 9/6, 5pm): Reading is all of Chapter 1 of the 10th edition. Problems, pp. 16-17: #8 (b)-(c), 10, 18. Theoretical Problems, pp. 18-19: #8, 11. Also: calculate how many poker hands there are with (a) one pair (i.e. pattern xxyzw of card values), (b) three of a kind (pattern xxxyz), and (c) two pair (pattern xxyyz).
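For the extra poker-hand problem, the three counts follow from choosing the card values for each pattern and then the suits; they can be checked with a few lines of Python:

```python
from math import comb

# Five-card hands from a standard 52-card deck, counted by value pattern:
one_pair   = comb(13, 1) * comb(4, 2) * comb(12, 3) * 4 ** 3           # xxyzw
three_kind = comb(13, 1) * comb(4, 3) * comb(12, 2) * 4 ** 2           # xxxyz
two_pair   = comb(13, 2) * comb(4, 2) ** 2 * comb(11, 1) * comb(4, 1)  # xxyyz
print(one_pair, three_kind, two_pair)   # 1098240 54912 123552
```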


Homework 2 (due Wednesday 9/25, 5pm; 11 problems in all): Reading consists of Chapters 2, 3 and 4 through 4.8.3. Ch.2 Problems, pp.50-54: #7, #27, #32. [But in #27, first find the requested probability and then also find the probability of the event that A is the first to select the red ball and that this happens at A's 3rd pick.] Ch.3 Problems, pp.103-112: #19, #23, #54. Note that we are not assuming anything about any player's winning a game other than player A. But we are assuming that the events of winning separate games are all (jointly) independent. Ch.4 Problems, pp.175-179: #7, 48.


(7.3.a) Gambles are independent, and each one results in the player being equally likely to win or lose $1$ unit. Let $W$ denote the net winnings of a gambler whose strategy is to stop gambling immediately after his first win. Find $P\{W>0\}$.


Alternative solution (for the expected winnings): With probability $\frac12$, we win immediately and get a dollar. With probability $\frac12$, we lose the first game, lose a dollar, and the game effectively starts all over. When the game starts all over, we can expect to win $E[W]$ dollars. That is, when we lose the first game, we get $-1+E[W]$ dollars. Hence $E[W]=\frac12\cdot 1+\frac12\left(-1+E[W]\right)$, which gives $E[W]=0$.
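This can be checked deterministically by enumerating the distribution of $W$: the gambler loses $k$ games and then wins one with probability $(1/2)^{k+1}$, for net winnings $W=1-k$. A small Python sketch (the truncation point 200 is an arbitrary cutoff; the tail beyond it is negligible):

```python
# Distribution of W: lose k games, then win one, with probability (1/2)**(k+1),
# giving net winnings W = 1 - k.  Support truncated at k = 200.
pmf = [(1 - k, 0.5 ** (k + 1)) for k in range(200)]
p_positive = sum(p for w, p in pmf if w > 0)   # only k = 0 gives W > 0
expected_w = sum(w * p for w, p in pmf)        # ~ 0
print(p_positive)   # 0.5
```

Note that $P\{W>0\}=1/2$, since $W>0$ exactly when the very first gamble is won.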


(7.10.a) Consider $3$ trials, each having the same probability of success. Let $X$ denote the total number of successes in these trials. If $E[X]=1.8$, what is the largest possible value of $P\{X=3\}$? Construct a probability scenario that results in $P\{X=3\}$ having the stated value.


Probability scenario: $10$ balls are in an urn. $6$ are red and $4$ are green. We are to pick $3$ balls without replacement. Success is picking a red ball. Whatever color we pick on the first trial, we remove the balls of the other color from the urn for the second and third trials. So if we pick a red ball on the first trial, then we remove all of the green balls for the remaining trials. And vice versa.


Since $P(X_3=1 \mid X_1=1)=1$, it is sufficient to show that $\{X_1=1,\,X_2=1\}=\{X_1=1\}$. Clearly $\{X_1=1,\,X_2=1\}\subset\{X_1=1\}$. In the other direction, suppose $a\in\{X_1=1\}$. Then $a$ is an outcome where the first pick is a red ball. This implies that the green balls are removed, and hence $X_2=1$.


Note that $\{X=2\}\subset\{X\neq3\}$. Hence $1=P\{X=2\}\leq P\{X\neq3\}\leq1$, so $P\{X\neq3\}=1$. Hence $P\{X=3\}=0$, and this probability attains its smallest possible value. Also note that $P\{X_i=1\}=0.6$ for $i=1,2,3$.


(7.11) Consider $n$ independent flips of a coin having probability $p$ of landing on heads. Say that a changeover occurs whenever an outcome differs from the one preceding it. For instance, if $n=5$ and the outcome is $HHTHT$, then there are $3$ changeovers. Find the expected number of changeovers.
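By linearity of expectation, each of the $n-1$ adjacent pairs of flips is a changeover with probability $p(1-p)+(1-p)p=2p(1-p)$, so the expected number of changeovers is $2(n-1)p(1-p)$. A brute-force check in Python, summing over all $2^n$ outcome sequences:

```python
from itertools import product

def expected_changeovers(n: int, p: float) -> float:
    """Exact E[number of changeovers] by summing over all 2**n sequences."""
    total = 0.0
    for seq in product("HT", repeat=n):
        prob = p ** seq.count("H") * (1 - p) ** seq.count("T")
        total += prob * sum(a != b for a, b in zip(seq, seq[1:]))
    return total
```

For instance, `expected_changeovers(5, 0.3)` agrees with $2\cdot4\cdot0.3\cdot0.7=1.68$.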


To see this, define a success to be a stage where a white ball replaces a black ball. Then each success means $1$ less black ball in the urn. And there will be no more black balls in the urn exactly when you accumulate $m$ successes. Hence $X$ is the number of independent trials needed to accumulate $m$ successes, where a success occurs with probability $1-p$. This matches the definition of a negative binomial random variable with parameters $m$ and $1-p$.
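As a numerical sanity check on this identification, one can sum $k$ against the negative binomial pmf $P(X=k)=\binom{k-1}{m-1}q^m(1-q)^{k-m}$ and recover the mean $m/q$. A small Python sketch, where $q$ plays the role of the success probability $1-p$ and the cutoff `kmax` is an arbitrary truncation of the infinite sum:

```python
from math import comb

def negbin_mean(m: int, q: float, kmax: int = 2000) -> float:
    """Mean number of trials needed for m successes (success probability q),
    summed from the pmf P(X = k) = C(k-1, m-1) * q**m * (1-q)**(k-m), k >= m."""
    return sum(k * comb(k - 1, m - 1) * q ** m * (1 - q) ** (k - m)
               for k in range(m, kmax + 1))
```

For example, `negbin_mean(3, 0.4)` is (numerically) $3/0.4 = 7.5$.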


