Mathematics for Computer Science, revised Monday 5th June. Eric Lehman, Google Inc.; F Thomson Leighton, Department of Mathematics and the Department of Electrical Engineering and Computer Science. This text explains how to use mathematical models and methods to analyze problems that arise in computer science. Jury trial: truth is ascertained by twelve people selected at random. Word of God: truth is ascertained by communication with God, perhaps via a third party.
This fall I will be teaching the required "Discrete Math for CS" course to about fifty students at the University of South Carolina. Previously I used. The aim of this book is to present some of the basic mathematics that is needed by computer scientists. The reader is not expected to be a mathematician and we. Mathematics for informatics and computer science / Pierre Audibert. Includes bibliographical references and index.
As they say, 'small mercies'. I find it much easier to recall details of material I've read in a physical textbook rather than an e-book. The unique cover, weight, texture, etc. If I have a certain problem I need to revisit, I can find everything online. There are a few exceptions, of course, which is why I said that downloading books that one wants to keep for the future and also reference it in the future is fine. But should I really download books for general education that I have to take and has nothing to do with my major?
I held onto my books to have as references.
Of course, this was back when the ubiquitous Internet and "instant access to everything" was some years away. Why should I be reduced after leaving an education, to solely what I can remember in my head and what my current circumstances keep refreshed in my mind and memory?
Education was an enormous investment not just of money but also of time and effort.
It makes sense to me to keep the resources I used at hand; it helps keep my access to that education at hand. Seen that way, while book prices are ridiculous, that expense, compared to the entire investment of money, time, and effort, is relatively small. It's an investment I've made against the future, where the best recollection and use of my education may be paramount. Still, those prices are simply unfair. For a system that purports to provide its students tools for life -- not just formulae, but ways of thinking and being -- to then increase the burden on those students of taking with them the references they used and might wish to retain as part of retaining that education: it seems that their actions in this matter oppose their rhetoric. Then again, even when prices weren't so high, I saw a lot of my fellow students simply discarding their books -- at the end of semester, not just after graduation. Many people do "move on" and live "in the present."
Living in the present, without the full benefit of history. And reacting based upon a more or less vague impression, rather than detailed knowledge. What, really, do our institutions want us to retain?
How do they want us to live our lives? As demonstrated through their actions -- e.g.
As for electronic copies: I find I read a lot better from printed pages than from a screen. And, as others have pointed out, the "legitimate" electronic versions all too often come with electronic leases -- leashes -- with access deniable, purposefully or inadvertently, at any point in the future. My Discrete Math book is still as accessible to me today -- with any particular personal notes of interest and focus I may have added -- as it was when I bought it.
I had to pay for it, and I've had to schlep it around. But I've also had to do that with my education, taking up premium space in my gray matter. I paid a lot more for the education; I hope and believe keeping the book handy helps me retain that larger value. And I still find that the good books provide me more useful and valuable access to their information than an ever more cluttered and noisy Internet -- for all that there are many gems -- resources and contributors -- on it.
These days, that is, if you can even find them. The other perspective is that if tuition is many thousands of dollars a year, then the cost may be not that dramatic.
Yeah, when you need an access code you are out of luck. Thankfully many professors don't do that. That money can be spent for rent and other more high priority stuff.
Honest question: why would you spend weeks doing proof by induction? Looking it up, I think I did that when I was 16, but I've never used it while programming. I can't see a single benefit to knowing it for programming.
Been programming professionally for 12 years now. What am I missing? Why do you think it's important? Any reasoning about loops or recursion requires an implicit understanding of proof by induction. Making it formally explicit seems like a perfectly fine idea. In some sorts of code, you can get a lot of mileage from "loop invariants" and "loop variants". Understanding these is more or less the same as understanding proof by induction.
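To make that connection concrete, here is a small sketch (my own example, not from the thread) where the loop invariant of a summation loop is literally the induction hypothesis for the closed form 1 + 2 + ... + n = n(n+1)/2:

```python
def triangular(n):
    """Compute 1 + 2 + ... + n with a loop, asserting the invariant
    acc == k*(k+1)/2 -- exactly the induction hypothesis for the
    closed-form formula."""
    acc, k = 0, 0
    while k < n:
        assert acc == k * (k + 1) // 2  # induction hypothesis at step k
        k += 1
        acc += k                        # induction step: add the next term
    assert acc == n * (n + 1) // 2      # conclusion, now for k = n
    return acc
```

The asserts never fire: each pass through the loop body is the inductive step, and the check before the first iteration is the base case.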
Whether you are missing anything depends on whether you ever write the sort of code that benefits from this. Even if you do, you still might not be missing anything. You might be comfortable with loop invariants but not have connected them with proof by induction.
Or you might be good at reasoning about these things in other ways that don't involve invariants. You can stop reading here if you're already very familiar with loop variants and invariants. Suppose you're writing a binary search. Don't write a binary search. Use someone else's that's already had the bugs taken out. This is basically pretty simple but surprisingly easy to get subtly wrong, which means it's the sort of code it may be useful to write and document in such a way that you have an informal proof of its correctness.
Like this: takes time at most proportional to the logarithm of the number of elements. Is it correct? Those variants and invariants (1) divide that question in two and (2) provide a strategy for answering the second half of the question.
First sub-question: Second sub-question: Strategy for second sub-question: The first invariant implies that since mid is always between lo inclusive and hi exclusive it's safe to access a[mid]. The second, given that it holds at both start and end of the loop body, implies that if we exit the loop then indeed x isn't in the array.
The only ways to leave the function are via the return in the middle, which only happens when we have explicitly found x, and via the return at the end, which only happens when x is known to be absent; therefore, if we return anything, we return the right thing.
Note that that last bit is more or less a proof by induction on array size that the function always returns. So if the invariants hold then the function does the right thing in the right amount of time. Proving that the invariants hold is the induction-like bit. So the first invariant always holds.
The second invariant holds at the start since lo,hi are bounds on the entire array. So the second invariant holds at the end of the loop -- and of course therefore at the start of the next iteration. Note that those were both proofs by induction, though I wasn't super-explicit about the fact. It's probably pretty clear what sort of code this is useful for: If you're writing a language's standard library, or implementing some sort of iterative mathematical algorithm, then you're quite likely to find it useful.
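The binary search being discussed might look like this in Python -- a sketch reconstructing the shape of the code the comment describes, with the two invariants and the variant written out as comments (the names lo, hi, mid follow the comment's wording):

```python
def binary_search(a, x):
    """Return an index i with a[i] == x, or -1 if x is not in the
    sorted list a."""
    lo, hi = 0, len(a)
    # Invariant 1: 0 <= lo <= hi <= len(a), so any mid we compute
    #              is a safe index into a.
    # Invariant 2: if x is anywhere in a, it is in a[lo:hi].
    while lo < hi:
        mid = (lo + hi) // 2  # lo <= mid < hi, so a[mid] is safe
        if a[mid] == x:
            return mid        # found x explicitly
        elif a[mid] < x:
            lo = mid + 1      # x, if present, lies strictly right of mid
        else:
            hi = mid          # x, if present, lies strictly left of mid
        # Both invariants still hold here, and hi - lo strictly
        # decreased (the variant), so the loop terminates after at
        # most log2(len(a)) + 1 iterations.
    return -1                 # a[lo:hi] is empty, so x is absent
```

Each comment is one clause of the informal correctness proof: the invariants give partial correctness, the strictly decreasing variant gives termination and the logarithmic bound.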
The less like that your code is, the less often this technique will be useful to you. Fairly extreme example: Slightly off topic, but maybe still interesting.
I used to write binary searches like yours, with two tests per iteration: If comparisons are cheap, as they are if you are searching an array of numbers in a compiled language, then it doesn't make much difference.
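The code after the colon was lost in transcription, but the alternative the commenter is contrasting with -- one comparison per iteration instead of two -- conventionally looks like this (my sketch of the standard variant, not the commenter's original code):

```python
def binary_search_one_test(a, x):
    """Binary search with a single comparison per iteration: the
    equality test is deferred until the range shrinks to one
    candidate, which can matter when comparisons are expensive."""
    lo, hi = 0, len(a)
    while hi - lo > 1:
        mid = (lo + hi) // 2  # hi - lo >= 2 guarantees lo < mid < hi
        if a[mid] <= x:
            lo = mid          # candidate stays in a[lo:hi]
        else:
            hi = mid
    # At most one candidate left; one final equality test decides.
    if lo < hi and a[lo] == x:
        return lo
    return -1
```

The trade-off: slightly more iterations on average (it never exits early on an equal element), but only one three-way decision per pass.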
I'm afraid you're not very good at explaining yourself; I've no idea what point you're trying to make. Sorry to burst your bubble, but any moderately complex LOB CRUD app usually has a large number of far more complex algos than a binary search. In my experience, algos are the really easy part of programming. The hard part is managing complexity.
His point is that when you design an algorithm you need to be able to reason about why it works. Based on the rest of your comments in this thread, it sounds to me like you have something against the mathematical background of CS?
Writing code is not the same thing as doing computer science. Plenty of people are perfectly happy doing the former, but CS as an academic discipline is rooted in discrete math so any school that teaches it should also teach the background. Ah, the old "but it's not a programming degree, it's about theory". I thought we were past that these days. Notice how it took you just one sentence to summarise the few hundred words he wrote.
We've a fairly big problem in this industry that "qualified" CS graduates don't actually signal anything about their suitability for the profession. A significant chunk can't even fizzbuzz.
I'd suggest that teaching mathematical models instead of showing loops and recursion in practical code is a large chunk of the problem. Oh, come on. What sornaensis's one sentence summarizes is what I wrote at length. It's true that his is shorter; it also gives less information.
For the avoidance of doubt, that isn't a problem. Since you declared yourself unable to understand what I wrote, it's fair enough to try to simplify it.
The rest of what I wrote was (1) an answer to your subsidiary question "What am I missing?". Of course if I'd known you'd respond as unpleasantly as you did, I wouldn't have bothered trying to be helpful. But at the time I thought you might well be asking a sincere question rather than just wanting to vent about how out of touch those hoity-toity theoreticians are.
I don't know how you came to the conclusion that the quality of CS graduates is poor, or how the cause of this perceived lack of quality is somehow due to courses focusing on theory. In my experience very little in a run-of-the-mill CS undergrad program is dedicated to theory, so I might come to the opposite conclusion: people miss important patterns and concepts because no one explained to them why they are important.
Patterns you have spent years coming to understand intuitively, perhaps. It looks like you've been downvoted a lot. For what it's worth, it wasn't me. I didn't claim that CRUD apps don't have complexity in them. I claimed that CRUD apps don't typically have the sort of thing in them for which this kind of small-scale semi-formal stuff is useful in them. I am happy to stand by that claim; do you actually disagree with it?
I wasn't. There are plenty of CRUD apps that add much more value to the world than almost any piece of code of the sort that invariants are super-helpful for.
Well, doubtless some are, but as you say they commonly have huge amounts of complication in them, of a sort that isn't amenable to the kind of formalized reasoning I was talking about. To answer the implied question in your first sentence: I wasn't trying to make a point; I was giving an answer to a question you asked, namely why mathematical induction might be important for programmers.
Briefer and more explicit version of that answer: For people writing certain kinds of code, understanding mathematical induction makes it easier to reason about that code via techniques such as loop invariants, which makes it easier to make that code bullet-proof.
There are many other kinds of code for which nothing closely related to mathematical induction is of any value at all.
And, to reiterate lest I set the bee in your bonnet buzzing again: Whether a kind of code benefits from this sort of technique has basically nothing to do with how important it is, or how difficult it is to write overall, or what anyone should think of the people writing it.
It's helpful for truly understanding recursion. If it's so useful for "truly" understanding it, why do I have to show so many CS educated juniors how to use recursion? Have to point out to them to use it instead of doing crazy nested loops or other stupid solutions to a problem simply solved using recursion?
Given that I obviously don't "truly" understand it, having never done a CS degree.
I'd posit that most CS students don't truly understand recursion, they just vaguely know the theoretical basis behind a practical skill they have no experience in. Who says they understood induction? You can get a CS degree without understanding induction, and you can understand induction without a CS degree.
I don't know which languages you and your CS-educated juniors use. A smart CS student would work for a company that valued intellectual capital instead of a company with high turnover. While statements, as implemented in the example above, offer nearly as much access to the input stack as recursive functions, about equal risk of an infinite loop, and no risk of stack overflow. I have a bachelor's in Math, so I was likely taught a more substantial "theoretical basis," and less "practical skill," than the curriculum most CS graduates were taught.
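A minimal illustration of that recursion/iteration trade-off (my own example, not from the thread): the same function written recursively and as a while loop, where the loop version replaces stack frames with an explicit accumulator.

```python
def fact_rec(n):
    """Recursive factorial: one stack frame per level, so a very
    large n risks overflowing the call stack."""
    if n == 0:
        return 1
    return n * fact_rec(n - 1)


def fact_iter(n):
    """Iterative factorial: the same induction, but the while loop
    keeps the running product in an accumulator, using constant
    stack depth."""
    acc = 1
    while n > 0:
        acc *= n
        n -= 1
    return acc
```

Both are justified by the same induction on n; the recursive form states it directly, the loop form carries it in the invariant "acc times n! equals the original n!".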
Or about the flaws in assuming perfect knowledge of actors in a system? They are the basis of proofs about recursive algorithms; sure, you may not need them for everyday programming, but that's different for computer science. You need it for a follow-on course, Theory of Computation.
It's freely available, though I recommend steering away from the PDF on their site, as it's got quite a few errors that are corrected in the source on GitHub. I'm a big fan of these books. This document is over pages. How long are you supposed to take to read and understand all of this?
And is this really all necessary? On the surface this looks like an insurmountable task with questionable benefits. Don't get me wrong, but in the past 6 years of casual and professional programming, I've needed only a basic understanding of math, the most difficult thing being collision detection in games, and that includes doing RSA cryptography by hand. This paper starts off with proofs, something I've never had in school and which always seemed to me as if it only belonged in scientific math papers, not the practical life of someone doing computer science.
I don't mean to criticize the document or math in general; I would genuinely like to understand what working through all of this is going to bring me, especially when it starts with something I've never needed or seen outside of theoretical math discussions. In professional programming, most of the time, system design is most crucial and would use less of these mathematics. These pages are not insurmountable.
The benefits are hardly questionable. Their use is apparent when you take a Design and Analysis of Algorithms course like https: A good example of using knowledge of the PDF is analysing the expected runtime of a hashtable, which turns out to be Θ(1) in the average case. Good analysis of algorithms inspires better design in general. Data structures and software engineering courses would probably be sufficient for many software engineering jobs out there. Databases, networks, OS and security are good areas to have knowledge of.
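As an illustration of that Θ(1) average case (my own sketch, not from the course mentioned above): with chaining and a fixed load factor, the average chain length a lookup traverses stays constant no matter how large the table grows.

```python
import random

def avg_chain_length(n_keys, n_buckets, trials=50, seed=0):
    """Simulate a chained hash table: insert n_keys uniformly random
    keys into n_buckets and return the average chain length seen by a
    lookup of a random existing key. At load factor alpha = n/m this
    stays near 1 + alpha regardless of table size."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        buckets = [0] * n_buckets
        for _ in range(n_keys):
            buckets[rng.randrange(n_buckets)] += 1
        # A random existing key lands in bucket i with probability
        # buckets[i] / n_keys and its lookup scans buckets[i] entries.
        total += sum(b * b for b in buckets) / n_keys
    return total / trials
```

At load factor 1 the measured value hovers around 2 (roughly 1 + alpha) whether the table holds a thousand keys or four thousand; that size-independence is exactly the Θ(1) average-case claim.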
One good property of mathematics is that it provides guarantees in the form of equalities, inequalities, or equivalences.
Such guarantees can help you ensure that your system holds up quantitatively. I find this remark by Terence Tao particularly good: Yes, mathematics has been, is, and will be at the cutting edge of software engineering.
For historical, current, and future examples, consider also the etymology: there is a field at the cutting edge of computation, computational science, that requires the most mathematical knowledge of any discipline that includes the "comput-" stem.
Math goes beyond providing guarantees of equality, inequality, and equivalence: This is, in fact, a textbook for a discrete mathematics course for year 1 computer science students: This covers intro to proofs, mathematical logic, number theory, graph theory, combinatorics, and probability. In a typical undergrad math curriculum, these would be five or six different courses.
So if you have a two-semester course or self-study covering this textbook, it's probably more efficient than learning the material in the math department by a factor of 2.
Yes, stuff's been cut. This is more of a survey of these fields with the emphasis on the most important points, and informing you of the most basic parts. So if you encounter one of the more specialized problems that might have been covered in a more specialized course, you'll have hints on where to look, what to Google for, and what sort of reasoning to use.
There's a lot you can do in programming without knowing a ton of math, just like these days you can drive a car quite well without having a degree in mechanical engineering. But just like with a car, the more you know about the deep underpinnings of things, the more you can do. A person with deep knowledge of how their car works may rebuild the engine and overbore the cylinders for more displacement, put in a camshaft with more lift, and replace the factory exhaust manifold with headers with better flow.
A computer programmer who understands the mathematical underpinnings of his work might replace the standard library implementation of HashMap with one that's better suited to his use case, or do his own derivations to figure out how to implement back-propagation in a neural network, etc. No, it's not a perfect analogy, but the point stands I think - having the deep knowledge of the theory and foundations in any field enables you to push the boundaries beyond any "black box" that is provided to you.
I have found the mechanics of mathematical proof invaluable when programming, because they show you how to be very clear about what you do know and what you don't know. Basically: "Is this thing just a random example, or something that is true in all cases?" In the book, explaining what a proof is, how proofs are typically constructed, and how to write a good one takes them ten pages, all written in very down-to-earth language.
Then there are eight pages of problems you could do if you wanted to test your understanding. I bet if you read those ten pages you'd feel there was nothing special there, just obvious ways to reason about things. It depends on what you mean by "read and understand". My approach with something like this is to breeze through it a first time, just to understand the main point of each section and how it all fits together. And you know what to look up, where, when you need it in the future.
And you also know whatever part you might be personally interested in more, and you can go back and read those sections in more detail. This approach works well with all types of non-fiction, and it can actually be really helpful to first "skim" through every non-fiction book you read, before reading in detail -- knowing in advance where the author is going can often help remove a lot of confusion.
It depends on what you want to achieve. Strong fundamentals in maths gives you the tools to solve big problems. The more ambitious the problem you want to help solve, like say a traffic-control system for a high-speed train network, then the more maths you will need.
Synaesthesia on Mar 6: It's hardly insurmountable, but to answer your question: maybe people will read it because they're just curious and want to learn. Also you don't have to learn all, or any, of it -- maybe you could just use it as a reference book. The exercises are not included in this, so I think hours should suffice. OCW video lectures for the course: https: It'd be nice if they are able to record Leighton's 6.
Meyer teaches the class 3 out of every 4 semesters in an inverted classroom setting (you read and learn topics on your own before lecture, and lecture time is spent in smaller groups with TAs helping you solve problems), while Leighton teaches the class lecture-style, which is better for OCW imo.
You might want to check the older version of this course: https: I was lucky enough to sit in his lectures. For no particular reason I opted to learn from one of the older lecture series. Can confirm that Leighton's lectures are a delight. This book is a really accessible primer on the basics -- much more readable than a lot of other textbooks, which kind of go all-out in terms of rigor rather than present things intuitively.
Reasoning About Computation.
The probability section is especially good. I almost banned you before realizing you were probably talking to your instructor. It's good to build up a substantive comment history before posting like this! I printed off the version (yes, the whole thing) and it has served as a very valuable reference. I wish there was a way to buy a nice printed edition to support the authors.
If they read this - thanks! Also, is a version of the course coming to OCW? I imagine that's why they'd release the updated PDF. Freshman at MIT here taking this class -- the lectures are actually taught in a flipped classroom format, so I wouldn't imagine they would release the course considering there are no lectures to follow.
I could see them releasing problem sets, however. Who is your instructor? Tell them to release a hard copy! Well, shiiiiit. I came to Hacker News to avoid doing my discrete math homework. While this is number 1, guess I'll plug Fleck's textbook from UofI: http: On this subject, has anyone taken this? https: I'm still brushing up on Calc, but may have to learn discrete math on my own, since Harvard Extension School offers discrete and algorithms during the same semester, and I don't want to wait 2 years before I can take algorithms.
It combines data structures and discrete math into one massive text.

If you are in any doubt about your preparation for the class, please come and talk to any one of us as soon as possible. Grading Summary: Grading will be on an absolute scale.
Your final grade will be in the range and will be computed as the sum of five categories. 50 points: homeworks (average of all but your lowest hw score, with the average capped at 50 pts). The homeworks will be graded by the course reader; depending on the time available, we reserve the right to grade some of the problems in more detail than others, and to award correspondingly more credit for them. Thus, if you turn in incomplete homeworks you are gambling on your grade.
Collaboration Collaboration on homeworks is welcome and warmly encouraged. You may work in groups of at most three people; however, you must always write up the solutions on your own.
Similarly, you may use references to help solve homework problems, but you must write up the solution on your own and cite your sources. You may not share written work or programs with anyone else. You may not receive help on homework assignments from students who have taken the course in previous years, and you may not review homework solutions from previous years. You will be asked to acknowledge all help you received from others.
This will not be used to penalize you, nor will it affect your grade in any way. Rather, this is intended only for your own protection.
If you work in a group, you'll be required to change group partners after the first midterm. In writing up your homework you are allowed to consult any book, paper, or published material. If you do so, you are required to cite your source s.
Simply copying a proof is not sufficient; you are expected to write it up in your own words, and you must be able to explain it if you are asked to do so.
Your proofs may refer to course material and to homeworks from earlier in the semester.
Except for this, all results you use must be proved explicitly. Copying solutions or code, in whole or in part, from other students or any other source without acknowledgment constitutes cheating. Any student found to be cheating in this class will automatically receive an F grade and will also be referred to the Office of Student Conduct. We believe that most students can distinguish between helping other students and cheating. Explaining the meaning of a question, discussing a way of approaching a solution, or collaboratively exploring how to solve a problem within your group is an interaction that we encourage.
On the other hand, you should never read another student's solution or partial solution, nor have it in your possession, either electronically or on paper. You should write your homework solution strictly by yourself. You must explicitly acknowledge everyone who you have worked with or who has given you any significant ideas about the homework.
Not only is this good scholarly conduct, it also protects you from accusations of theft of your colleagues' ideas. Presenting another person's work as your own constitutes cheating, whether that person is a friend, an unknown student in this class or a previous semester's class, a solution set from a previous semester of this course, or an anonymous person on the Web who happens to have solved the problem you've been asked to solve.
Everything you turn in must be your own doing, and it is your responsibility to make it clear to the graders that it really is your own work. The following activities are specifically forbidden in all graded course work: Possession or theft of another student's solution or partial solution in any form electronic, handwritten, or printed. Giving a solution or partial solution to another student, even with the explicit understanding that it will not be copied.
Working together with anyone outside your homework group to develop a solution that is subsequently turned in either by you or by the other person. Looking up solution sets from previous semesters and presenting that solution, or any part of it, as your own.

I skipped two years of math in high school and ended up BSing my way through Calculus without learning any of it, so I'm trying to figure out what I may need to fill the gaps.
Besides the already mentioned "Book of Proof", try:
MrInvisible on Mar 6: Was about to recommend this. Strang doesn't teach