<texit info> author=Ivan Savov title=Hacker's manual backgroundtext=off </texit>
[ THROWING IN THE TOWEL ON BECOMMING A PROGRAMMER ]
http://waterstreetgm.org/throwing-in-the-towel-on-becomming-a-programmer/
http://www.pixelstech.net/article/index.php?id=1352219881
http://sysadmincasts.com/episodes/13-crash-course-on-common-commands
[ Command Line Crash Course ]
http://learnpythonthehardway.org/book/appendixa.html
http://viget.com/extend/level-up-your-shell-game
http://matt.might.net/articles/settling-into-unix/ http://matt.might.net/articles/basic-unix/
[ A little collection of cool unix terminal/console/curses tools ]
http://kkovacs.eu/cool-but-obscure-unix-tools/
% http://linoxide.com/guide/linux-command-shelf.html
1. SYSTEM
$ uname -a                   => Display linux system information
$ uname -r                   => Display kernel release information
$ cat /etc/redhat-release    => Show which version of redhat is installed
$ uptime                     => Show how long the system has been running + load
$ hostname                   => Show system host name
$ hostname -i                => Display the IP address of the host
$ last reboot                => Show system reboot history
$ date                       => Show the current date and time
$ cal                        => Show this month's calendar
$ w                          => Display who is online
$ whoami                     => Who you are logged in as
$ finger user                => Display information about user
2. HARDWARE
$ dmesg                      => Detected hardware and boot messages
$ cat /proc/cpuinfo          => CPU model
$ cat /proc/meminfo          => Hardware memory
$ cat /proc/interrupts       => Number of interrupts per CPU per I/O device
$ lshw                       => Display hardware configuration of the system
$ lsblk                      => Display block device information
$ free -m                    => Used and free memory (-m for MB)
$ lspci -tv                  => Show PCI devices (useful to find vendor ids)
$ lsusb -tv                  => Show USB devices
$ lshal                      => Show a list of all devices with their properties
$ dmidecode                  => Show hardware info from the BIOS (vendor details)
$ hdparm -i /dev/sda         => Show info about disk sda
$ hdparm -tT /dev/sda        => Do a read speed test on disk sda
$ badblocks -s /dev/sda      => Test for unreadable blocks on disk sda
3. STATISTICS
$ top                        => Display and update the top cpu processes
$ mpstat 1                   => Display processor related statistics
$ vmstat 2                   => Display virtual memory statistics
$ iostat 2                   => Display I/O statistics (2 sec intervals)
$ tail -n 500 /var/log/messages => Show the last 500 kernel/syslog messages
$ tcpdump -i eth1            => Capture all packets flowing on interface eth1
$ tcpdump -i eth0 'port 80'  => Monitor all traffic on port 80 (HTTP)
$ lsof                       => List all open files belonging to all active processes
$ lsof -u testuser           => List files opened by a specific user
$ free -m                    => Show amount of RAM
$ watch df -h                => Watch free disk space continuously
4. USERS
$ id                         => Show the active user id with login and group
$ last                       => Show last logins on the system
$ who                        => Show who is logged on the system
$ groupadd admin             => Add group "admin"
$ useradd -c "Sam Tomshi" -g admin -m sam  => Create user "sam" and add to group "admin"
$ userdel sam                => Delete user sam
$ adduser sam                => Add user "sam"
$ usermod                    => Modify user information
5. FILE COMMANDS
$ ls -al                     => Display all information about files/directories
$ pwd                        => Show current directory path
$ mkdir directory-name       => Create a directory
$ rm file-name               => Delete file (be careful with rm)
$ rm -r directory-name       => Delete directory recursively
$ rm -f file-name            => Forcefully remove file
$ rm -rf directory-name      => Forcefully remove directory recursively
$ cp file1 file2             => Copy file1 to file2
$ cp -r dir1 dir2            => Copy dir1 to dir2, creating dir2 if it doesn't exist
$ mv file1 file2             => Move/rename file1 to file2
$ ln -s /path/to/file-name link-name => Create symbolic link to file-name
$ touch file                 => Create file or update its timestamp
$ cat > file                 => Place standard input into file
$ more file                  => Output the contents of file page by page
$ head file                  => Output the first 10 lines of file
$ tail file                  => Output the last 10 lines of file
$ tail -f file               => Output the contents of file as it grows, starting with the last 10 lines
$ gpg -c file                => Encrypt file
$ gpg file.gpg               => Decrypt file
6. PROCESS RELATED
$ ps                         # Display your currently active processes
$ ps aux | grep 'telnet'     # Find all process ids related to the telnet process
$ pmap pid                   # Memory map of the process with the given pid
$ top                        # Display all running processes
$ kill pid                   # Kill the process with the given pid
$ killall proc               # Kill all processes named proc
$ pkill processname          # Send a signal to a process by name
$ bg                         # Resume suspended jobs without bringing them to the foreground
$ fg                         # Bring the most recent job to the foreground
$ fg n                       # Bring job n to the foreground
7. FILE PERMISSION RELATED
$ chmod octal file-name      # Change the permissions of file to octal,
                             # set separately for user, group, and world:
                             #   4 - read, 2 - write, 1 - execute
Example:
$ chmod 777 /data/test.c     # Set rwx permissions for owner, group, and world
$ chmod 755 /data/test.c     # Set rwx for owner, rx for group and world
$ chown owner-user file                    # Change owner of the file
$ chown owner-user:owner-group file-name   # Change owner and group owner of the file
$ chown owner-user:owner-group directory   # Change owner and group owner of the directory
Example:
$ chown bobbin:linoxide test.txt
$ ls -l test.txt
-rw-r--r-- 1 bobbin linoxide 0 Mar 04 08:56 test.txt
8. NETWORK
$ ifconfig -a                # Display all network interfaces and ip addresses
$ ifconfig eth0              # Display details for a specific ethernet port
$ ip addr show               # Display all network interfaces and ip addresses (iproute2 package; more powerful than ifconfig)
$ ip address add 192.168.0.1 dev eth0  # Set ip address
$ ethtool eth0               # Show ethernet status (set full duplex, pause parameters)
$ mii-tool eth0              # Show ethernet status (similar to ethtool)
$ ping host                  # Send echo request to test the connection
$ whois domain               # Get whois information for domain
$ dig domain                 # Get DNS information for domain
$ dig -x host                # Reverse lookup of host
$ host google.com            # Look up the DNS ip address for the name
$ hostname -i                # Look up the local ip address
$ wget file                  # Download file
$ netstat -tupl              # List all active listening ports (tcp, udp, pid)
9. COMPRESSION / ARCHIVES
$ tar cf home.tar home       # Create tar named home.tar containing home/
$ tar xf file.tar            # Extract the files from file.tar
$ tar czf file.tar.gz files  # Create a tar with gzip compression
$ gzip file                  # Compress file and rename it to file.gz
10. INSTALL PACKAGE
 
$ rpm -i pkgname.rpm         # Install rpm-based package
$ rpm -e pkgname             # Remove package
Install from source:
$ ./configure
$ make
$ make install
11. SEARCH
$ grep pattern files         # Search for pattern in files
$ grep -r pattern dir        # Search recursively for pattern in dir
$ locate file                # Find all instances of file
$ find /home/tom -name 'index*'  # Find file names that start with "index"
$ find /home -size +10000k   # Find files larger than 10000k in /home
12. LOGIN (SSH AND TELNET)
$ ssh user@host              # Connect to host as user
$ ssh -p port user@host      # Connect to host using a specific port
$ telnet host                # Connect to host using telnet
13. FILE TRANSFER
scp:
$ scp file.txt server2:/tmp                  # Secure copy file.txt to remote host's /tmp folder
$ scp nixsavy@server2:/www/*.html /www/tmp   # Copy *.html files from remote host to local /www/tmp folder
$ scp -r nixsavy@server2:/www /www/tmp       # Recursively copy all files and folders from remote host to local /www/tmp folder
rsync:
$ rsync -a /home/apps /backup/               # Synchronize source to destination
$ rsync -avz /home/apps linoxide@192.168.10.1:/backup   # Synchronize local and remote directories with compression enabled
14. DISK USAGE
$ df -h                      # Show free space on mounted filesystems
$ df -i                      # Show free inodes on mounted filesystems
$ fdisk -l                   # Show disk partition sizes and types
$ du -ah                     # Display disk usage in human readable form
$ du -sh                     # Display total disk usage of the current directory
$ findmnt                    # Display target mount points for all filesystems
$ mount device-path mount-point  # Mount a device
15. DIRECTORY TRAVERSE
$ cd ..                      # Go up one level of the directory tree
$ cd                         # Go to the $HOME directory
$ cd /test                   # Change to the /test directory
http://www.joshstaiger.org/archives/2005/07/bash_profile_vs.html
[ Safe mode ]
http://redsymbol.net/articles/unofficial-bash-strict-mode/
http://bytemuse.com/post/the-terminal-crash-course-and-reference-sheet/
bash usefulness
http://www.fizerkhan.com/blog/posts/What-I-learned-from-other-s-shell-scripts.html
[ robust scripts ]
http://www.davidpashley.com/articles/writing-robust-shell-scripts/
[ Shell scripting guide from google ]
http://google-styleguide.googlecode.com/svn/trunk/shell.xml#Indentation
links to resources to learn UNIX tools https://news.ycombinator.com/item?id=2415365
bash mistakes http://www.pixelbeat.org/programming/shell_script_mistakes.html
less common commands
http://www.danielmiessler.com/blog/collection-of-less-commonly-used-unix-commands
http://www.kfirlavi.com/blog/2012/11/14/defensive-bash-programming/
[ The grammar of vim ]
http://stackoverflow.com/questions/1218390/what-is-your-most-productive-shortcut-with-vim/1220118#1220118
https://gist.github.com/gmccreight/7519289 via http://nerds.weddingpartyapp.com/tech/2013/11/17/mastering-vim-in-vim/
[ A tutorial ]
http://www.danielmiessler.com/study/vim/
http://serverfault.com/questions/127795/tar-a-directory-and-only-include-certain-file-types
find tilesets -type f \( -name \*\.png -o -name \*\.GIF -o -name \*\.gif -o -name \*\.PNG -o -name \*\.BMP -o -name \*\.bmp \) -print0 | xargs -t0 tar -cvzf tilesets_new.tgz
git remote add upstream <git repository you want to contribute to>
git remote update
git checkout -b upstream upstream/master
git cherry-pick <SHA1 hash of bugfix commit>
git cherry-pick <SHA1 hash of another commit to add>
git push origin upstream
and then I see my upstream branch on github, switch to it and can submit the pull request with just the changes I want.
Installing GIT on Windows (using cygwin)
- In the ~ folder: ssh-keygen -C "me@some.where" -t rsa
  (This will generate your ssh keys into the ~/.ssh folder. If you look inside it you'll see the files id_rsa and id_rsa.pub.)
- Copy the ssh key into the clipboard: cat .ssh/id_rsa.pub | putclip
- Go to https://github.com/.
- In your account settings (if you've already signed up), paste the public key into the field (ctrl+V pastes the clipboard). It should look like this:
  ssh-rsa AAAAB3NzaC1yc ...a lot of things... xX88+2JViQ== me@some.where
via http://redmine.jamoma.org/projects/jamoma/wiki/Installing_and_setting_up_GIT
Things to include
Data Structures: Linked Lists, Binary Trees, Tries (not try/catch), Stacks, Queues, Vectors / ArrayLists, Hash Tables
Algorithms: Breadth First Search, Depth First Search, Binary Search, Merge Sort, Quick Sort, Tree Insert / Find / etc.
Concepts: Bit Manipulation, Singleton Design Pattern, Factory Design Pattern, Memory (Stack vs. Heap), Recursion, Big-O Time
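As a warm-up for the lists above, here is a minimal sketch of one of the named algorithms, binary search, in plain Python (the function name and test values are my own, not from any particular source):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.
    Runs in O(log n) time by halving the search interval each step."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1      # target is in the right half
        else:
            hi = mid - 1      # target is in the left half
    return -1

print(binary_search([1, 3, 5, 7, 11], 7))    # 3
print(binary_search([1, 3, 5, 7, 11], 4))    # -1
```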
[ Running time in plain english ]
http://stackoverflow.com/questions/487258/plain-english-explanation-of-big-o
Some important algorithms for scientific purposes
01. Metropolis Algorithm for Monte Carlo: http://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_alg…
02. Simplex Method for Linear Programming: http://en.wikipedia.org/wiki/Simplex_algorithm
03. Krylov Subspace Iteration Methods: http://en.wikipedia.org/wiki/Krylov_subspace
04. The Decompositional Approach to Matrix Computations: http://dl.acm.org/citation.cfm?id=615766
05. The Fortran Optimizing Compiler : http://www.ibiblio.org/pub/languages/fortran/ch1-1.html (just the starting page)
06. QR Algorithm for Computing Eigenvalues : http://en.wikipedia.org/wiki/QR_algorithm
07. Quicksort Algorithm for Sorting : http://en.wikipedia.org/wiki/Quicksort
08. Fast Fourier Transform : http://en.wikipedia.org/wiki/Fast_Fourier_transform
09. Integer Relation Detection : http://en.wikipedia.org/wiki/Integer_relation_algorithm
10. Fast Multipole Method : http://en.wikipedia.org/wiki/Fast_multipole_method
[ SIAM put out a 'ten algorithms of the century' ]
https://www.siam.org/pdf/news/637.pdf
[ Algorithms in the Linux kernel and the Chrome web browser ]
http://cstheory.stackexchange.com/questions/19759/core-algorithms-deployed/19773#19773
[ Simple algorithms for students to implement as assignments ]
http://en.docsity.com/news/interesting-facts/great-algorithms-revolutionized-computing/
[ Algorithms in JS ]
https://github.com/nzakas/computer-science-in-javascript/
[ Functional data structures ]
http://cstheory.stackexchange.com/a/1550/7273
[ CS interview questions with C++ code ]
http://www.grokit.ca/spc/computer_science_review/
[ Visualization of data structures ]
http://www.cs.usfca.edu/~galles/visualization/Algorithms.html
[ Good overview of language and APIs ]
https://gist.github.com/yig/8744917
[ Learning JavaScript stories ]
https://news.ycombinator.com/item?id=5434627
[ Tutorials and info ]
http://superherojs.com/
[ Design patterns ]
http://shichuan.github.com/javascript-patterns/
via https://news.ycombinator.com/item?id=5392984
[ JavaScript promises explained nicely ]
http://ianbishop.github.com/blog/2013/01/13/escape-from-callback-hell/
[ JavaScript functions ]
http://markdaggett.com/blog/2013/02/15/functions-explained/
[ Overview of js from Crockford ]
http://jaxenter.com/douglas-crockford-java-was-a-colossal-failure-javascript-is-succeeding-because-it-works-45928.html
[ Gotchas ]
http://www.jblotus.com/2013/01/13/common-javascript-gotchas/
[ Exercises ]
http://blog.bolshchikov.net/post/40917260776/javascript-quiz-set
[ Currying and partial application ]
http://raganwald.com/2013/03/07/currying-and-partial-application.html
[ quiz – language quirks and edge cases ]
http://javascript-puzzlers.herokuapp.com/
https://developer.mozilla.org/en-US/docs/Web/JavaScript/A_re-introduction_to_JavaScript
[ Nice intro JS tutorials ]
http://tutorials.jumpstartlab.com/
[ JS data structures ]
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Data_structures
[ App structure ]
http://weblog.bocoup.com/organizing-your-backbone-js-application-with-modules/
[ Full walkthrough with model hierarchies etc ]
http://blog.pamelafox.org/2013/07/a-guide-to-writing-backbone-apps-at.html?utm_source=javascriptweekly&utm_medium=email
[ List of resources ]
https://github.com/jashkenas/backbone/wiki/Tutorials%2C-blog-posts-and-example-sites
[ A simple tutorial ] https://github.com/204NoContent/backbone-phonecat
[ The best free JavaScript resources by revolunet - rich web apps for desktop and mobile ]
http://jsbooks.revolunet.com/
[ list of tools for web development ]
https://news.ycombinator.com/item?id=5415081
[ Chrome web tools tutorial ]
http://discover-devtools.codeschool.com/
Bubbling vs Capturing http://www.quirksmode.org/js/events_order.html
hadouken animation in jQuery
http://www.thinkful.com/learn/intro-to-jquery
https://developer.mozilla.org/en-US/
never use http://www.w3schools.com/ - reason → http://www.w3fools.com/
http://wiki.commonjs.org/wiki/Modules/1.1
https://github.com/amdjs/amdjs-api/wiki/AMD
https://developer.mozilla.org/en-US/docs/Web/HTML/Using_HTML5_audio_and_video
http://www.slideshare.net/remy.sharp/html5-js-apis
http://dev.opera.com/articles/view/better-performance-with-requestanimationframe/
https://developers.google.com/web-toolkit/
https://github.com/clojure/clojurescript
http://www.infoq.com/presentations/ClojureScript
[ Lots of resources for learning commands ]
http://www.linfo.org/
http://bsdpunk.blogspot.ca/2013/09/facebook-first-level-interview-for.html
In March I had an interview with Facebook, and I wrote the questions down, at least during the ones in which I had enough time to do so.
14 Question Linux Test, if you get a switch wrong, you get the question wrong.
1. Port for ssh and dns
2. What is an MX record
3. What do you run to check the consistency of the kernel and file system
4. What command displays current cpu usage, ram usage, and the process list
5. This question was worded funny: he said it in a manner that made me think he was asking about the time in uptime, but he meant what 3 other pieces of information are included in uptime, so I got this question wrong (I said hours, minutes, seconds)
6. The 3 timestamps *nix keeps on every file
7. and 8. I didn't write anything down — it must have been stressful at that part of the interview. I think one of these I missed: it was what signal does kill send, and I said SIGKILL or something instead of SIGTERM
9. What do $$ and $2 do in bash
10. I didn't have anything written down for this one
11. What shows the routing table?
12. TCP handshake
13. What does MTU stand for, and what is the MTU for ethernet? (by accident I got this one right)
14. What signal does a child send to a parent when it is finished?
Answers I think are correct. I didn't graduate to the next interview so, yeah, don't take my word for it.
1. 22 and 53
2. DNS mail records
3. fsck
4. top
5. the current time, the length of time the system has been up, the number of users, and the load average of the system over the last 1, 5, and 15 minutes
6. created, modified, last accessed
7. ?
8. ?
9. the process id, and the 2nd argument of a bash script
10. ?
11. route or netstat -nr
12. SYN, SYN-ACK, ACK
13. Maximum Transmission Unit, 1500
14. SIGCHLD
[ Sysadmin Casts - simple bite sized sysadmin screencasts ]
http://sysadmincasts.com/
[ Network programming guide ]
http://www.beej.us/guide/bgnet/output/html/singlepage/bgnet.html
[ Network programming: TCP, UDP and limitations ]
http://news.ycombinator.com/item?id=5256751
quote: Inventing your own protocol is a rite of passage for any network programmer.
[ Mini simulated networks ]
https://github.com/mininet/mininet/wiki/Documentation
[ TCP IP guide ]
http://tcpipguide.com/free/t_toc.htm
[ TCP/IP explained ]
http://www.objc.io/issue-10/ip-tcp-http.html
[ Visualization of DNS Query and Response ]
http://m-mizutani.github.io/blog/2014/02/10/dnsviz-en/
Confuse people SSHing to your host with a redirect back to theirs.
socat -d -d TCP-L:22,reuseaddr,fork SYSTEM:"nc \$SOCAT_PEERADDR 22"
via https://twitter.com/climagic/status/452079585665224704
ssh kung fu
http://blog.tjll.net/ssh-kung-fu/
[ https: explained nicely ]
https://www.tbray.org/ongoing/When/201x/2012/12/02/HTTPS
[ More details ]
http://www.moserware.com/2009/06/first-few-milliseconds-of-https.html
[ How to secure a linux server ] http://spenserj.com/blog/2013/07/15/securing-a-linux-server/ https://news.ycombinator.com/item?id=6384603
[ Basic security for running websites ]
https://news.ycombinator.com/item?id=5594739
[ Security research papers ]
http://www.covert.io/security-datascience-papers/
Computers talk to each other using protocols. One of the most important ones is the IP protocol. In this section I will describe the usual steps, and the protocols used, in doing something as simple as going to yahoo.ca in your internet browser.
Each protocol defines the standard block of data which it can transport, and we call these chunks of data packets.
The data packets of one protocol can be enclosed inside packets of another protocol, and in fact this is commonly the case. By going to yahoo.com/index.html, you are sending the HTTP request "GET /index.html", wrapped in a TCP packet, which is itself wrapped in an IP packet, which is sent as a data frame over the ethernet:
eth( IP(to=12.3.134.21, data=TCP( ..., data=HTTP("GET /index.html") ) ) )
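The nesting above can be sketched in a few lines of plain Python. This is a toy illustration only — the helper names and field names are invented for clarity, and no real networking happens here:

```python
# Toy model of protocol encapsulation: each layer wraps the payload
# of the layer above it in its own "packet" (here, a dict).

def http(request):
    return {"layer": "HTTP", "data": request}

def tcp(payload, dst_port=80):
    return {"layer": "TCP", "dst_port": dst_port, "data": payload}

def ip(payload, to="12.3.134.21"):
    return {"layer": "IP", "to": to, "data": payload}

def eth(payload):
    return {"layer": "Ethernet", "data": payload}

frame = eth(ip(tcp(http("GET /index.html"))))

# Peel the layers back off, outermost first:
layers = []
packet = frame
while isinstance(packet, dict):
    layers.append(packet["layer"])
    packet = packet["data"]

print(layers)   # ['Ethernet', 'IP', 'TCP', 'HTTP']
print(packet)   # GET /index.html
```

Unwrapping the frame layer by layer is exactly what the receiving host's network stack does, handing each inner payload to the next protocol handler up.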
Lots of links
http://www.smashingmagazine.com/2014/02/14/learning-resources-roundup/
[ Getting Started with Modern Web Development ]
http://modernweb.com/2014/07/16/getting-started-modern-web-development/
[ Web Development and Design Blogs ]
http://blog.trackduck.com/2014/07/21/10-web-development-design-blogs-need-bookmark/
short tutorial from the ground up https://github.com/mjhea0/thinkful-html
links in http://www.allanberger.com/post/49938151321/how-to-make-your-apps-retina-ready
http://cssarrowplease.com/ http://css3generator.com/ http://www.spritecow.com/
[ syntax autocorrector ] http://kev.inburke.com/kevin/helping-you-get-html-right/ https://github.com/kevinburke/tecate
http://learn.shayhowe.com/advanced-html-css/performance-organization
[ html5 intro ]
http://mathiasbynens.be/notes/html5-levels
[ page layout that changes depending on window size ]
http://learn.shayhowe.com/advanced-html-css/responsive-web-design
Absolute Centering in CSS
.Absolute-Center {
  margin: auto;
  position: absolute;
  top: 0;
  left: 0;
  bottom: 0;
  right: 0;
}
Creating Non-Rectangular Layouts with CSS Shapes
http://sarasoueidan.com/blog/css-shapes/index.html
THE COMPUTER GRAPHICS LIBRARY
http://fabiensanglard.net/Computer_Graphics_Principles_and_Practices/index.php
debugging and showing clients https://ngrok.com/usage
now there is an app for that
http://www.chrishardie.com/2013/05/end-of-website-development/
[ Shortcuts and idioms of python ]
http://maxburstein.com/blog/python-shortcuts-for-the-python-beginner/
http://pythonbooks.revolunet.com/
[ How to Think Like a Computer Scientist: Learning with Python ]
http://openbookproject.net/thinkcs/python/english2e/
[ Python generators explained ]
http://stackoverflow.com/questions/231767/the-python-yield-keyword-explained
multiprocessing in 4 lines
https://medium.com/p/40e9b2b36148
[ pip and virtualenv ] http://dabapps.com/blog/introduction-to-pip-and-virtualenv-python/
[ How virtualenv works ]
https://www.hackerschool.com/blog/14-there-is-no-magic-virtualenv-edition
Lots of useful scripts
http://www.fmwconcepts.com/imagemagick/
[ Links to useful resources for getting started ]
http://lurnq.com/lesson/Getting-started-with-Python-Tips-Tools-and-Resources/
[ iPython with cloud backend – cheap cluster ]
http://badhessian.org/2013/11/cluster-computing-for-027hr-using-amazon-ec2-and-ipython-notebook/
[ A list of links to Python software projects ]
http://strata.oreilly.com/2013/03/python-data-tools-just-keep-getting-better.html
Learn regular expressions in about 55 minutes
http://qntm.org/files/re/re.html
<texit info> author=Ivan Savov title=Sympy tutorial backgroundtext=off </texit>
Using a computer algebra system (CAS), you can quickly carry out mathematical calculations like: computing mathematical expressions, solving equations, performing calculus procedures, and simulating physics systems. Computer algebra systems are great when you're learning math, because they will help you simplify complicated expressions and solve hairy equations.
All computer algebra systems offer essentially the same functionality, so it doesn't matter which one you use: the free ones like sympy, Maxima, and Octave, or the commercial ones like Maple, MATLAB, and Mathematica.
This tutorial is an introduction to sympy, which is a symbolic computer algebra system written in python.
In a symbolic CAS, numbers and operations are represented precisely, so the answers you obtain are exact. For example, in sympy the number $\sqrt{2}$ is represented as the object Pow(2,1/2), whereas in numerical computer algebra systems like octave the number $\sqrt{2}$ is represented as the approximation $1.41421356237310$ (a float). For most purposes the approximation is okay, but sometimes approximations can lead to problems: float(sqrt(2))*float(sqrt(2)) = 2.0000000000000004 $\neq 2$. Using sympy, you won't run into such problems: Pow(2,1/2)*Pow(2,1/2)=2 $= 2$.
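The float round-off can be seen with the Python standard library alone (sympy isn't required for this illustration; Fraction stands in for sympy's Rational):

```python
import math
from fractions import Fraction

# Floating-point square roots are approximations, so squaring loses exactness:
approx = math.sqrt(2) * math.sqrt(2)
print(approx)          # 2.0000000000000004, not exactly 2
print(approx == 2)     # False

# Exact rational arithmetic (the same idea behind sympy's Rational)
# has no such problem:
exact = Fraction(1, 7) * 7
print(exact == 1)      # True
```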
Do not use sympy to cheat on your homework! The whole point of homework is for you to suffer. Mathematical skill is developed through mathematical suffering—only when trying to solve a problem that you haven't solved before will you be forced to think and practice your skills. You can use sympy to check your answers though. In fact, one of the best ways to learn math is to invent practice problems for yourself, solve them by hand, and then check your answers using sympy.
This tutorial is organized as follows. We'll begin by introducing the basics of using sympy and the bread-and-butter functions used for calculating, manipulating expressions, and solving equations. Afterward, we'll discuss the sympy functions that implement calculus operations like differentiation and integration. We'll also introduce the sympy functions used to deal with vectors and complex numbers. Later we'll see how to use vectors and calculus operations to understand Newtonian mechanics. In the last section, we'll introduce the linear algebra functions available in sympy.
The easiest way to use sympy, provided you're connected to the Internet, is to visit ''live.sympy.org''. You'll be presented with an interactive prompt into which you can enter your commands—right from your browser. If you want to use sympy on your own computer, you must install python and the sympy python package. You can then open a command prompt and start a sympy session using:
you@host$ python
Python X.Y.Z
[GCC a.b.c (Build Info)] on platform
Type "help", "copyright", or "license" for more information.
>>> from sympy import *
>>>
The >>> prompt indicates you are in the python shell, which accepts python commands. The python command from sympy import * imports all the sympy functions into the current namespace. All the sympy functions are now available to you. To exit the python shell press CTRL+D:
>>> (press CTRL+D)
you@host$
I highly recommend that you also install ipython, which is an improved interactive python shell. If you have ipython and sympy installed, you can start an ipython shell with sympy pre-imported using the command isympy. For an even better experience, you can try ipython notebook, which is a web frontend for the ipython shell.
Each section in this tutorial begins with a specific python import statement, which indicates the functions used in that section. If you use the statement from sympy import * at the beginning of your code, you don't need to run these individual import statements, but it's a good idea to look at them so you'll know what part of your sympy vocabulary will be developed in that section.
In this section we'll learn about the basic sympy objects and the math operations we can carry out on them. We'll learn the sympy equivalents of many of the math verbs we used throughout the book: "to solve" (an equation), "to simplify" (an expression), "to expand" (an expression), etc.
This section serves a dual purpose. For readers familiar with the fundamentals of math, it introduces the power of computers, so they will never have to suffer through another tedious arithmetic calculation done by hand. For computer people who may have forgotten their high school math, it serves as a review of the fundamental concepts of mathematics.
The basic number objects in sympy are the python numbers: ints and floats:
>>> 3
3            # an int
>>> 3.0
3.0          # a float
Special care is required when specifying rational numbers, because int division doesn't return the right answer:

>>> 1/7
0            # an int
If you force float division by using 1.0 instead of 1 you'll obtain:

>>> 1.0/7
0.14285714285714285          # a float
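Note that the session above shows Python 2 behaviour. In Python 3 the / operator always does true division and // does integer division, so the trap reads differently there (a quick check, stdlib only):

```python
# Python 3 semantics: / is true division, // is floor (integer) division.
print(1 / 7)    # 0.14285714285714285
print(1 // 7)   # 0
```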
This is better, but we still only have a float approximation of the number $\frac{1}{7} \in \mathbb{Q}$. To obtain an exact representation of $\frac{1}{7}$ you need to create a sympy expression. You can sympify an expression using the shortcut function S():

>>> S('1/7')
1/7          # Rational(1,7)
We could have achieved the same result using S('1')/7, since a sympy object divided by an int is a sympy object.
Apart from the tricky division operator, the other math operators +, -, and * work as you would expect. The syntax ** is used in python to denote exponents:

>>> 2**10
1024
When solving math problems, it is always preferable to work with sympy objects and only compute the numeric answer at the end. To obtain a numeric approximation of a sympy object as a float, call its .evalf() method:

>>> pi
pi
>>> pi.evalf()
3.14159265358979
The method .n() and the global function N() are equivalent to .evalf(). You can also specify the number of digits of precision you want as an argument to these functions: pi.n(400) computes an approximation of $\pi$ to 400 decimals.
from sympy import Symbol, symbols
Python is a civilized language so you don't need to define variables before you use them. When you write
>>> a = 3
you define a new name a and set its value to 3. From now on you can use a in your calculations.
Most interesting sympy calculations require us to define some symbols, which are the basic math objects in sympy.
For your convenience, when ''live.sympy.org'' starts, it runs the following commands for you:
>>> from __future__ import division
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)
The first statement instructs python to convert 1/7 to 1.0/7 when dividing, thus potentially saving you from the int division confusion. The second statement imports all the sympy functions. The remaining statements define the generic symbols x, y, z, and t, and some other symbols with special properties (to be discussed later).
Note the difference between the following two statements:
>>> x + 2
x + 2        # an Expr
>>> p + 2
NameError: name 'p' is not defined
The name x was defined to be a symbol, so sympy knows that x + 2 is an expression, but the variable p was not defined, so sympy doesn't know what to make of p + 2. If you want to use p in expressions, you must first define it as a symbol:
>>> p = Symbol('p')          # the same as p = symbols('p')
>>> p + 2
p + 2
You can define a sequence of variables using the following notation:
>>> a0, a1, a2, a3, a4 = symbols('a0:5')
You can use any name you want for a variable, but it's best if you avoid the letters in the string QCOSINE because they have a special use in sympy: I is the unit imaginary number $i\equiv\sqrt{-1}$, E is the base of the natural logarithm, S() is the sympify function, N() is used to obtain numeric approximations, O is used for big-O notation, etc.
The underscore symbol _ is a special variable that contains the last returned value. The variable _ is analogous to the ans button on certain calculators, and is useful in multi-step calculations:

>>> 3 + 3
6
>>> _ * 2
12
>>> from sympy import simplify, factor, expand, collect
You define sympy expressions by combining symbols, the basic math operations, and other functions:

>>> expr = 2*x + 3*x - sin(x) - 3*x + 42
>>> simplify(expr)
2*x - sin(x) + 42
The function simplify
can be used with any expression to simplify it.
The examples below illustrate some other useful sympy
functions
that correspond to common mathematical operations on expressions:
>>> factor( x**2-2*x-8 ) (x - 4)*(x + 2) >>> expand( (x-4)*(x+2) ) x**2 - 2*x - 8 >>> collect(x**2 + x*b + a*x + a*b, x) x**2 + (a+b)*x + a*b
To substitute a given value into an expression,
we call the .subs()
method, passing in a python dictionary object
with the variable–value substitutions you want to make:
>>> expr = sin(x) + cos(y) >>> expr sin(x) + cos(y) >>> expr.subs({x:1, y:2}) sin(1) + cos(2) >>> expr.subs({x:1, y:2}).n() 0.425324148260754
Note we used the .n()
method to obtain the numeric value of the expression.
>>> from sympy import solve
The function solve
is the workhorse of sympy
.
This incredibly powerful function knows how to solve all kinds of equations.
You can throw pretty much any equation at solve
to find the solutions.
The function solve
is so powerful that it can be somewhat disheartening for high school
students to find out about it—they'll wonder why they wasted 5 years of their life learning
about various functions and ways to solve equations when solve
can do all the work for them.
Don't worry: understanding where these solutions come from is still important.
The function solve
takes two arguments.
The first argument specifies the equation you want to solve expressed in the form expr==0
.
The second argument is the variable you want to solve for.
For example,
to solve the quadratic equation $x^2+2x-8=0$, we use
>>> solve( x**2 + 2*x - 8, x) [2, -4]
In this case the equation has two solutions so solve
returns a list.
Check that $x=2$ and $x=-4$ satisfy the equation $x^2+2x-8=0$.
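As a quick check, we can substitute each solution back into the quadratic expression using the subs method shown earlier and confirm that both evaluate to zero (a minimal sketch):

```python
from sympy import symbols, solve

x = symbols('x')
expr = x**2 + 2*x - 8
sols = solve(expr, x)
# substituting each solution back into the expression should give 0
checks = [expr.subs({x: s}) for s in sols]
print(sols, checks)
```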
The best part about solve
and sympy
is that you can obtain symbolic answers when solving equations.
Instead of solving one specific quadratic equation,
we can solve all possible equations of the form $ax^2 + bx+c=0$ using
>>> a, b, c = symbols('a b c') >>> solve( a*x**2 + b*x + c, x) [(-b + sqrt(b**2 - 4*a*c))/(2*a), (-b-sqrt(b**2-4*a*c))/(2*a)]
In this case solve
calculated the solution in terms of the symbols a
, b
, and c
.
You should recognize that the solution in this case corresponds to the quadratic formula.
You can verify that substituting the coefficients $a=1$, $b=2$, and $c=-8$ into the general solution
results in the same answers as we obtained above:
>>> gen_sol = solve( a*x**2 + b*x + c, x) >>> [ gen_sol[0].subs({'a':1,'b':2,'c':-8}), gen_sol[1].subs({'a':1,'b':2,'c':-8}) ] [2, -4]
To solve a system of equations, you can pass solve
a list of equations
as the first argument and specify a list of unknowns to solve for as the second argument.
For example, to solve for $x$ and $y$ in the system of equations $x+y =3$ and $3x-2y=0$,
use:
>>> solve([x + y - 3, 3*x - 2*y], [x, y]) {x: 6/5, y: 9/5}
The function solve
is like a Swiss Army knife you can use to solve all kinds of problems.
For example, suppose you want to complete the square in the expression $x^2-4x+7$,
that is, you want to find constants $h$ and $k$ such that $x^2-4x+7 = (x-h)^2 +k$.
There is no special purpose “complete the square” function in sympy
,
but you can call solve
on the equation $(x-h)^2 +k \ - \ (x^2-4x+7) = 0$
to find the unknowns $h$ and $k$:
>>> h, k = symbols('h k') >>> solve( (x-h)**2 + k - (x**2-4*x+7), [h,k] ) [(2, 3)] # so h = 2 and k = 3 >>> ((x-2)**2+3).expand() # verify... x**2 - 4*x + 7
>>> from sympy import together, apart
By default, sympy
will not put together or split apart rational expressions.
You have to use together
to calculate symbolically the addition of fractions:
>>> together( 1 + 1/(x + 2) ) (x + 3)/(x + 2)
Alternatively, if you have a rational expression and you want to divide the numerator by the denominator,
use the apart
function:
>>> apart( (x**2+x+4)/(x+2) ) x - 1 + 6/(x + 2)
Euler's number $e=2.71828\ldots$, which is defined in one of several ways:
\[
e \equiv \lim_{n\to \infty} \left( 1 + \frac{1}{n}\right)^{n}
\equiv \lim_{\epsilon \to 0} \left( 1 + \epsilon\right)^{1/\epsilon}
\equiv \sum_{n=0}^\infty \frac{1}{n!},
\]
is denoted E
in sympy
.
The function exp(x)
is equivalent to E**x
.
The function log
computes the natural logarithm (logarithm base $e$):
>>> log(E**3) # same as ln(E**3) 3
By default, sympy
assumes the inputs to functions like exp
and log
are complex numbers,
so it will not expand certain logarithmic expressions.
However, indicating to sympy
that the inputs are positive real numbers will make the expansion work:
>>> x, y = symbols('x y') >>> log(x*y).expand() log(x*y) >>> a, b = symbols('a b', positive=True) >>> log(a*b).expand() log(a) + log(b)
Let's define a polynomial $P$ that has roots at $x=1$, $x=2$ and $x=3$.
>>> P = (x-1)*(x-2)*(x-3) >>> P (x - 1)*(x - 2)*(x - 3)
To see the expanded version of the polynomial,
call its expand
method:
>>> P.expand() x**3 - 6*x**2 + 11*x - 6
When you see the polynomial in its expanded form $P(x)=x^3-6x^2 + 11x - 6$,
you can't immediately tell what its roots are.
This is why the factored form $P(x)=(x-1)(x-2)(x-3)$ is preferable.
To factor a polynomial simply call its factor
method:
>>> P.factor() (x - 1)*(x - 2)*(x - 3)
Or, you can call the more generic simplify
which is used to “simplify” any expression:
>>> P.simplify() (x - 1)*(x - 2)*(x - 3)
Recall that the roots of the polynomial are defined as the values of $x$ where $P(x)=0$.
We can use the solve
function to find the roots of the polynomial:
>>> roots = solve(P,x) >>> roots [1, 2, 3] # let's check if equal >>> simplify( P - (x-roots[0])*(x-roots[1])*(x-roots[2]) ) 0
Note we used the simplify
function to check whether two expressions are equal in the previous section.
This way of checking equality works because $P=Q$ if and only if $P-Q=0$.
This is the best way to check whether two expressions are equal in sympy
because it takes care of all possible simplifications while comparing the expressions.
Below is a list of other ways for checking if two quantities are equal,
and some examples where they work and where they fail:
>>> p = (x-5)*(x+5) >>> q = x**2 - 25 >>> p == q # fail False >>> p - q == 0 # fail False >>> expand(p - q) == 0 True >>> factor(p - q) == 0 True >>> sin(x)**2 + cos(x)**2 == 1 # fail False >>> simplify( sin(x)**2 + cos(x)**2 - 1) == 0 True
>>> from sympy import sin, cos, tan, trigsimp, expand_trig
The trigonometric functions sin
and cos
take inputs in radians:
>>> sin(pi/6) 1/2 >>> cos(pi/6) sqrt(3)/2
For angles in degrees, you need the conversion factor of $\frac{\pi}{180}$[rad/$^\circ$] like this:
>>> sin(30*pi/180) 1/2
The inverse trigonometric functions are $\sin^{-1}(x)\equiv \arcsin(x)\equiv$asin(x)
and $\cos^{-1}(x)\equiv \arccos(x)\equiv$acos(x)
:
>>> asin(S(1)/2) pi/6 >>> acos(sqrt(3)/2) pi/6
Recall that $\tan(x) \equiv \frac{\sin(x)}{\cos(x)}$.
The inverse function of $\tan(x)$ is $\tan^{-1}(x)\equiv \arctan(x)\equiv$atan(x):
>>> tan(pi/6) 1/sqrt(3) # == ( 1/2 )/( sqrt(3)/2 ) >>> atan( 1/sqrt(3) ) pi/6
Here are some trigonometric identities that sympy
knows about:
>>> sin(x) == cos(x - pi/2) True
>>> simplify( sin(x)*cos(y)+cos(x)*sin(y) ) sin(x + y) >>> trigsimp(sin(x)*cos(y)+cos(x)*sin(y)) sin(x + y)
>>> e = 2*sin(x)**2 + 2*cos(x)**2 >>> trigsimp(e) 2 >>> trigsimp(log(e)) log(2*sin(x)**2 + 2*cos(x)**2) >>> trigsimp(log(e), deep=True) log(2)
>>> simplify(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4) cos(4*x)/2 + 1/2
The function trigsimp
does essentially the same job as simplify.
If instead of simplifying you want to expand a trig expression,
you should use expand_trig
because the default expand
won't do trig functions:
>>> expand( sin(2*x) ) # = (sin(2*x)).expand() sin(2*x) >>> expand_trig( sin(2*x) ) # = (sin(2*x)).expand(trig=True) 2*sin(x)*cos(x)
The hyperbolic sine and cosine in sympy
are denoted sinh
and cosh
respectively
and sympy
is smart enough to recognize them when simplifying expressions:
>>> simplify( (exp(x)+exp(-x))/2 ) cosh(x) >>> simplify( (exp(x)-exp(-x))/2 ) sinh(x)
Recall that $x=\cosh(\mu)$ and $y=\sinh(\mu)$ are defined as the $x$ and $y$ coordinates of a point on the hyperbola with equation $x^2 - y^2 = 1$ and therefore satisfy the identity $\cosh^2 x - \sinh^2 x =1$:
>>> simplify( cosh(x)**2 - sinh(x)**2 ) 1
>>> from sympy import I, re, im, Abs, arg, conjugate
Ever since Newton, the word “number” has referred to all of the following sets: the naturals $\mathbb{N}$, the integers $\mathbb{Z}$, the rationals $\mathbb{Q}$, and the real numbers $\mathbb{R}$. Each set of numbers is associated with a different class of equations. The natural numbers $\mathbb{N}$ appear as solutions of the equation $m+n=x$, where $m$ and $n$ are natural numbers (denoted $m,n \in \mathbb{N}$). The integers $\mathbb{Z}$ are the solutions to equations of the form $x+m=n$, where $m,n \in \mathbb{N}$. The rational numbers $\mathbb{Q}$ are necessary to solve for $x$ in $mx=n$, with $m,n \in \mathbb{Z}$. To find the solutions of $x^2=2$, we need the real numbers $\mathbb{R}$. New types of numbers are required for solving more complicated types of equations.
There are no real solutions to the quadratic equation $x^2=-1$,
but we can define an imaginary number $i =\sqrt{-1}$ (denoted I
in sympy
) that satisfies this equation:
>>> solve( x**2 + 1 , x) [-I, I]
The solutions are $x=-i$ and $x=i$, and indeed we can verify that $i^2+1=0$ and $(-i)^2+1=0$.
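We can ask sympy to confirm this directly (a tiny check; I**2 evaluates to $-1$ automatically):

```python
from sympy import I

# both i and -i satisfy x**2 + 1 == 0
print(I**2 + 1)      # 0
print((-I)**2 + 1)   # 0
```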
The set of complex numbers $\mathbb{C}$ is defined as follows $\{ a+bi \vert a,b \in \mathbb{R} \}$. Complex numbers have a real part and an imaginary part:
>>> z = 4 + 3*I >>> z 4 + 3*I >>> re(z) 4 >>> im(z) 3
Complex numbers can also be expressed in polar form: $z=|z|e^{i\theta}$. For a complex number $z=a+bi$, the quantity $|z|=\sqrt{a^2+b^2}$ is known as the absolute value of $z$, and $\theta$ is called the phase or the argument of $z$:
>>> Abs(z) 5 >>> arg(z) atan(3/4)
The complex conjugate of the number $z=a+bi$ is the number $\bar{z} = a-bi$:
>>> conjugate( z ) 4 - 3*I
Complex conjugation is important for computing the absolute value of $z$: $|z|=\sqrt{ z\bar{z} }$ and for division: $\frac{1}{z} \equiv \frac{\bar{z}}{|z|^2}$.
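Here is a short sketch of both formulas in sympy, using the number $z=4+3i$ from above:

```python
from sympy import I, sqrt, conjugate, simplify, expand

z = 4 + 3*I
# absolute value via |z| = sqrt(z * conjugate(z))
abs_z = sqrt(expand(z*conjugate(z)))
print(abs_z)                 # 5
# division via 1/z = conjugate(z)/|z|**2
inv_z = conjugate(z)/abs_z**2
print(inv_z)                 # 4/25 - 3*I/25
print(simplify(z*inv_z))     # 1, confirming inv_z is indeed 1/z
```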
>>> from sympy import expand
Euler's formula shows an important relation
between the exponential function $e^x$ and the trigonometric functions $\sin(x)$ and $\cos(x)$:
\[
e^{ix} = \cos x + i \sin x.
\]
To obtain this result in sympy
, you must specify that the number $x$ is real
and also tell expand
that you're interested in complex expansions:
>>> x = symbols('x', real=True) >>> exp(I*x).expand(complex=True) cos(x) + I*sin(x) >>> re( exp(I*x) ) cos(x) >>> im( exp(I*x) ) sin(x)
So basically, $\cos(x)$ is the real part of $e^{ix}$, and $\sin(x)$ is the imaginary part of $e^{ix}$. Whaaat? I know it's weird, but weird things are to be expected when one inputs imaginary values into a function!
Euler's identity is Euler's formula with $x=\pi$:
$e^{i\pi} = \cos \pi + i\sin \pi = - 1$,
which we can rewrite as $e^{i\pi} + 1=0$.
Let's check that sympy
knows about Euler's identity:
>>> (exp(I*pi) + 1).expand(complex=True) 0
Euler's formula is often used to rewrite the functions $\sin$ and $\cos$ in terms of complex exponentials. For example,
>>> (cos(x)).rewrite(exp) exp(I*x)/2 + exp(-I*x)/2
The operations of calculus are used to describe the limit behaviour of functions,
calculate rates of change,
calculate areas and volumes, etc.
In this section we'll learn about the sympy
functions that correspond to the operations of calculus:
limits, derivatives, integrals, sequences, and series.
>>> from sympy import oo
The infinity symbol is denoted oo
(two lowercase o's) in sympy
.
Infinity is not a number but a process: the process of counting forever.
Thus, $\infty + 1 = \infty$,
$\infty$ is greater than any finite number,
and $1/\infty$ is an infinitely small number.
Sympy knows how to correctly treat infinity in expressions:
>>> oo+1 oo >>> oo > 5000 True >>> 1/oo 0
>>> from sympy import limit
We use limits to describe, with mathematical precision, infinitely large quantities, infinitely small quantities, and procedures with infinitely many steps.
For example, the number $e=2.71828\ldots$ is defined as the limit $e \equiv \lim_{n\to \infty} \left( 1 + \frac{1}{n}\right)^{n}$:
>>> limit( (1+1/n)**n, n, oo) E
This limit expression describes the annual growth rate of a loan with a $100\%$ interest rate and compounding performed infinitely often.
Limits are also useful to describe the behaviour of functions.
Consider for example the function $f(x)=\frac{1}{x}$.
We can see what happens to $f(x)$ as $x$ goes to infinity,
and near $x=0$ using the limit
command:
>>> limit( 1/x, x, oo) 0 >>> limit( 1/x, x, 0, dir="+") oo >>> limit( 1/x, x, 0, dir="-") -oo
Indeed, as $x$ becomes larger and larger, the fraction $\frac{1}{x}$ becomes smaller and smaller, and in the limit $\lim_{x\to \infty} \frac{1}{x} = 0$. On the other hand, when $x$ takes on smaller and smaller positive values, the expression $\frac{1}{x}$ becomes infinite: $\lim_{x \to 0^+} \frac{1}{x} = \infty$. When $x$ approaches $0$ from the left, we have $\lim_{x \to 0^-} \frac{1}{x} = -\infty$. If these calculations are not clear to you, you should study the graph of $f(x)=\frac{1}{x}$.
Here is another example of a limit:
>>> limit(sin(x)/x, x, 0) 1
Limits are also used to define the derivative and the integral operations.
The derivative function, denoted $f'(x)$, $\frac{d}{dx}f(x)$, $\frac{df}{dx}$, or $\frac{dy}{dx}$,
describes the rate of change of the function $f(x)$.
The sympy
function diff
computes the derivative of any expression:
>>> diff(x**3, x) 3*x**2
The differentiation operation knows about the product rule $[f(x)g(x)]^\prime=f^\prime(x)g(x)+f(x)g^\prime(x)$, the chain rule $f(g(x))' = f'(g(x))g'(x)$, and the quotient rule $\left[\frac{f(x)}{g(x)}\right]^\prime = \frac{f'(x)g(x) - f(x)g'(x)}{g(x)^2}$:
>>> diff( x**2*sin(x), x ) 2*x*sin(x) + x**2*cos(x) >>> diff( sin(x**2), x ) cos(x**2)*2*x >>> diff( x**2/sin(x), x ) (2*x*sin(x) - x**2*cos(x))/sin(x)**2
The second derivative of a function can be obtained using diff(f, x, 2)
:
>>> diff(x**3, x, 2) # same as diff(diff(x**3, x), x) 6*x
Recall that the exponential function $f(x)=e^x$ is special because it is equal to its derivative:
>>> diff( exp(x), x) # same as diff( E**x, x) exp(x) # same as E**x
A differential equation is an equation that relates some unknown function $f(x)$
to its derivatives. An example of a differential equation is $f'(x)=f(x)$.
What is the function $f(x)$ which is equal to its derivative?
You can either try to guess what $f(x)$ is or use the dsolve
function:
>>> x = symbols('x') >>> f = symbols('f', cls=Function) >>> dsolve( f(x) - diff(f(x),x), f(x) ) f(x) == C1*exp(x)
We'll discuss dsolve
in more detail in the section on mechanics.
The tangent line to the function $f(x)$ at $x=x_0$ corresponds to the line that passes through the point $(x_0, f(x_0))$ and has the same slope as the function at that point. The tangent line to the function $f(x)$ at the point $x=x_0$ is described by the equation \[ T_1(x) = f(x_0) \ + \ f'(x_0)(x-x_0). \] What is the equation of the tangent line to $f(x)=\frac{1}{2}x^2$ at $x_0=1$?
>>> f = S('1/2')*x**2 >>> f x**2/2 >>> df = diff(f,x) >>> df x >>> T_1 = f.subs({x:1}) + df.subs({x:1})*(x - 1) >>> T_1 x - 1/2 >>> T_1.subs({x:1}) == f.subs({x:1}) True >>> diff(T_1,x).subs({x:1}) == diff(f,x).subs({x:1}) True
The tangent line $T_1(x)$ has the same value as $f(x)$ and the same slope as $f(x)$ at $x=1$.
Optimization is about choosing an input for a function $f(x)$ that results in the best value for $f(x)$. The best value usually means the maximum value (if the function measures something desirable like profits) or the minimum value (if the function describes something undesirable like costs).
The derivative $f'(x)$ encodes the information about the slope of $f(x)$. The second derivative $f^{\prime\prime}(x)$ encodes the information about the curvature of $f(x)$. Positive curvature means the function looks like $x^2$, negative curvature means the function looks like $-x^2$. The critical points of a function $f(x)$ are the solutions to the equation $f'(x)=0$. Each critical point is a candidate to be either a maximum or a minimum of the function.
As an example, let's find where the maximum of the function $f(x)=x^3-2x^2+x$ occurs on the interval $x \in [0,1]$. You should plot the function to see what it looks like.
>>> x = Symbol('x') >>> f = x**3-2*x**2+x >>> diff(f, x) 3*x**2 - 4*x + 1 >>> sols = solve( diff(f,x), x) >>> sols [1/3, 1] >>> diff(diff(f,x), x).subs( {x:sols[0]} ) -2 >>> diff(diff(f,x), x).subs( {x:sols[1]} ) 2
The point $x=\frac{1}{3}$ is a local maximum because it is a critical point of $f(x)$ where the curvature is negative, meaning $f(x)$ looks like the peak of a mountain. The maximum value of $f(x)$ is $f\!\left(\frac{1}{3}\right)=\frac{4}{27}$. The point $x=1$ is a local minimum because it is a critical point with positive curvature, meaning $f(x)$ looks like the bottom of a valley there.
The \emph{integral} of $f(x)$ corresponds to the computation of the area under the graph of $f(x)$. The area under $f(x)$ between the points $x=a$ and $x=b$ is denoted as follows: \[ A(a,b) = \int_a^b f(x) \: dx. \] The integral function $F(c)$ corresponds to the area calculation as a function of the upper limit of integration: \[ F(c) \equiv \int_0^c \! f(x)\:dx\,. \] The area under $f(x)$ between $x=a$ and $x=b$ is obtained by calculating the change in the integral function: \[ A(a,b) = \int_a^b \! f(x)\:dx = F(b)-F(a). \]
In sympy
we use integrate(f, x)
to obtain the integral function $F(x) = \int_0^x f(u)\,du$:
>>> integrate(x**3, x) x**4/4 >>> integrate(sin(x), x) -cos(x) >>> integrate(ln(x), x) x*log(x) - x
To compute the area under $f(x)$ between $x=a$ and $x=b$ use integrate(f, (x,a,b))
and sympy
will compute the integral $\int_a^b f(x) \, dx$ for you:
>>> integrate(x**3, (x,0,1)) 1/4
Note we can obtain the same result by first finding the integral function $F(x)$, and then using the formula $A(a,b) = F(b) - F(a)$:
>>> F = integrate(x**3, x) >>> F.subs({x:1}) - F.subs({x:0}) 1/4
Integrals correspond to signed area calculations:
>>> integrate(sin(x), (x, 0, pi)) 2 >>> integrate(sin(x), (x, pi, 2*pi)) -2 >>> integrate(sin(x), (x, 0, 2*pi)) 0
During the first half of its $2\pi$ cycle, the function $\sin(x)$ is above the $x$-axis and has positive area under the curve. During the second half of its cycle (from $x=\pi$ to $x=2\pi$), $\sin(x)$ is below the $x$-axis, so it contributes negative area. Draw the graph of $\sin(x)$ to see what is going on.
The integral is the “inverse operation” of the derivative. If you perform the integral operation followed by the derivative operation on some function, you'll obtain the same function: \[ \left(\frac{d}{dx} \circ \int dx \right) f(x) \equiv \frac{d}{dx} \int_c^x f(u)\:du = f(x). \]
>>> f = x**2 >>> F = integrate(f, x) >>> F x**3/3 # + C >>> diff(F,x) x**2
Alternately, if you compute the derivative of a function followed by the integral, you will obtain the original function $f(x)$ (up to a constant): \[ \left( \int dx \circ \frac{d}{dx}\right) f(x) \equiv \int_c^x f'(u)\;du = f(x) + C. \]
>>> f = x**2 >>> df = diff(f,x) >>> df 2*x >>> integrate(df, x) x**2 # + C
The fundamental theorem of calculus is important because it tells us how to solve differential equations. If we have to solve for $f(x)$ in the differential equation $\frac{d}{dx}f(x) = g(x)$, we can take the integral on both sides of the equation to obtain $f(x) = \int g(x)\,dx + C$.
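As an example of this integration procedure (a small sketch with a made-up right-hand side $g(x)=3x^2$ and initial condition $f(0)=1$):

```python
from sympy import symbols, integrate, solve

x, C = symbols('x C')
g = 3*x**2                      # hypothetical right-hand side g(x)
f = integrate(g, x) + C         # general solution: x**3 + C
# use the initial condition f(0) = 1 to fix the constant C
Cval = solve(f.subs({x: 0}) - 1, C)[0]
print(f.subs({C: Cval}))        # x**3 + 1
```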
Sequences are functions that take integers as inputs. Instead of continuous inputs $x\in \mathbb{R}$, sequences take natural numbers $n\in\mathbb{N}$ as inputs. We denote sequences as $a_n$ instead of the usual function notation $a(n)$.
We define a sequence by specifying an expression for its $n$th term:
>>> a_n = 1/n >>> b_n = 1/factorial(n)
We can see the value of the $n$th term by substituting the desired value of $n$:
>>> a_n.subs({n:5}) 1/5
We can use the python
list comprehension syntax [item for item in list]
to print the sequence values for some range of indices:
>>> [ a_n.subs({n:i}) for i in range(0,8) ] [oo, 1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7] >>> [ b_n.subs({n:i}) for i in range(0,8) ] [1, 1, 1/2, 1/6, 1/24, 1/120, 1/720, 1/5040]
Observe that $a_n$ is not properly defined for $n=0$ since $\frac{1}{0}$ is a division-by-zero error,
so, to be precise, we should say that the input space of $a_n$ is only the positive naturals: $a_n:\mathbb{N}^+ \to \mathbb{R}$.
Observe also how quickly the factorial
function grows: $n!=1\cdot2\cdot3\cdots(n-1)\cdot n$.
We're often interested in calculating the limits of sequences as $n\to \infty$. What happens to the terms in the sequence when $n$ takes on very large values?
>>> limit(a_n, n, oo) 0 >>> limit(b_n, n, oo) 0
Both $a_n$ and $b_n$ converge to $0$ as $n\to\infty$.
Many important math quantities are defined as the limit of some expression. An interesting example to consider is the number $\pi$, which is defined as the area of a circle of radius $1$. We can approximate the area of the unit circle by drawing a many-sided regular polygon around the circle. Splitting the $n$-sided regular polygon into identical triangular slices, we can obtain a formula for its area $A_n$ (see problem~\ref{mathprob:octagon-approx-to-circle}). In the limit as $n\to \infty$, the $n$-sided-polygon approximation to the area of the unit circle becomes exact:
>>> A_n = n*tan(2*pi/(2*n)) >>> limit(A_n, n, oo) pi
Suppose we're given a sequence $a_n$ and we want to compute the sum of all the values in this sequence. Series are sums of sequences. Summing the values of a sequence $a_n:\mathbb{N}\to \mathbb{R}$ is analogous to taking the integral of a function $f:\mathbb{R}\to \mathbb{R}$.
In sympy
we can use the summation
command, whose syntax is analogous to the integrate
command:
>>> a_n = 1/n >>> b_n = 1/factorial(n) >>> summation(a_n, [n, 1, oo]) oo >>> summation(b_n, [n, 0, oo]) E
We say the series $\sum a_n$ diverges (or is divergent) while the series $\sum b_n$ converges (or is convergent). As we sum together more and more terms of the sequence $b_n$, the total becomes closer and closer to some finite number. In this case, the infinite sum $\sum_{n=0}^\infty \frac{1}{n!}$ converges to the number $e=2.71828\ldots$.
Using the summation
command is useful because it allows us to compute infinite summations,
but for most practical applications we don't need to take an infinite number of terms in a series to obtain a good approximation.
Indeed, this is what is so neat about series: they represent a great way to obtain approximations.
Using the standard python
tools, we can obtain an approximation to $e$ that is accurate to six decimals as follows:
>>> import math >>> def b_nf(n): return 1.0/math.factorial(n) >>> sum([b_nf(n) for n in range(0,10)] ) 2.71828152557319 >>> E.evalf() 2.71828182845905 # true value
But wait, there is more! Not only can series be used to approximate numbers, they can also be used to approximate functions.
A power series is a series whose terms contain different powers of the variable $x$. The $n$th term in a power series is a function of both the sequence index $n$ and the input variable $x$.
For example the power series of the function $f(x)=\exp(x)=e^x$ is \[ \exp(x) \equiv 1 + x + \frac{x^2}{2} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots = \sum_{n=0}^\infty \frac{x^n}{n!}. \] This is, IMHO, one of the most important ideas in calculus: you can compute the value of $\exp(5)$ by taking the infinite sum of the terms in the power series with $x=5$:
>>> exp_xn = x**n/factorial(n) >>> summation( exp_xn.subs({x:5}), [n, 0, oo] ).evalf() 148.413159102577 >>> exp(5).evalf() 148.413159102577 # the true value
Note that sympy
is actually smart enough to recognize that the infinite series
you're computing corresponds to $\exp(5)$:
>>> summation( exp_xn.subs({x:5}), [n, 0, oo]) exp(5)
Taking as few as 35 terms in the series is sufficient to obtain a good approximation, so series are not some abstract thing for mathematicians but a practical trick you can use when you code:
>>> summation( exp_xn.subs({x:5}), [n, 0, 35]).evalf() 148.413159102577 >>> import math # redo using sympy-less python >>> def exp_xnf(x,n): return x**n/math.factorial(n) >>> sum([exp_xnf(5.0,i) for i in range(0,35)]) 148.413159102577
The coefficients in the power series of a function (also known as the Taylor series or Maclaurin series) depend on the value of the higher derivatives of the function. The formula for the $n$th term of the Taylor series of $f(x)$ expanded at $x=0$ is $a_n(x) = \frac{f^{(n)}(0)}{n!}x^n$, where $f^{(n)}(0)$ is the value of the $n$th derivative of $f(x)$ evaluated at $x=0$.
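Here is a sketch that builds the first eight Taylor terms of $\sin(x)$ directly from the formula $a_n(x) = \frac{f^{(n)}(0)}{n!}x^n$ and compares the result with sympy's series function:

```python
from sympy import symbols, sin, diff, factorial, series, simplify

x = symbols('x')
f = sin(x)
# a_n(x) = f^(n)(0)/n! * x**n, summed for n = 0..7
taylor = sum(diff(f, x, k).subs({x: 0})/factorial(k) * x**k
             for k in range(8))
print(taylor)                        # x - x**3/6 + x**5/120 - x**7/5040
# compare with the expansion from series(), dropping the O(x**8) term
print(simplify(taylor - series(f, x, 0, 8).removeO()))   # 0
```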
The sympy
function series
is a convenient way to obtain the series of any function.
Calling series(expr,var,at,nterms)
will show you the series of expr
expanded around var
=at
up to nterms
:
>>> series( sin(x), x, 0, 8) x - x**3/6 + x**5/120 - x**7/5040 + O(x**8) >>> series( cos(x), x, 0, 8) 1 - x**2/2 + x**4/24 - x**6/720 + O(x**8) >>> series( sinh(x), x, 0, 8) x + x**3/6 + x**5/120 + x**7/5040 + O(x**8) >>> series( cosh(x), x, 0, 8) 1 + x**2/2 + x**4/24 + x**6/720 + O(x**8)
Note the power series of $\sin$ and $\sinh$ contain only odd powers of $x$ while the power series of $\cos$ and $\cosh$ contain only even powers.
Some functions are not defined at $x=0$, so we expand them at a different value of $x$. For example the power series of $\ln(x)$ expanded at $x=1$ is
>>> series(ln(x), x, 1, 6) # Taylor series of ln(x) at x=1 x - x**2/2 + x**3/3 - x**4/4 + x**5/5 + O(x**6)
In this case the result returned by sympy
is a bit misleading:
the Taylor series of $\ln(x)$ expanded at $x=1$ actually contains terms of the form $(x-1)^n$:
\[
\ln(x) = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \frac{(x-1)^4}{4} + \frac{(x-1)^5}{5} + \cdots.
\]
You can verify this is the correct formula by substituting $x=1$.
The sympy
developers have simply chosen not to show this fact,
because when dealing with series in general we're mostly interested in the coefficients.
Instead of expanding $\ln(x)$ around $x=1$, we can obtain an equivalent expression if we expand $\ln(x+1)$ around $x=0$:
>>> series(ln(x+1), x, 0, 6) # Maclaurin series of ln(x+1) x - x**2/2 + x**3/3 - x**4/4 + x**5/5 + O(x**6)
The term Taylor series applies to all series expansions of functions. The term Maclaurin series applies specifically to Taylor series expansions at $x=0$, which are generally easier to work with.
A vector $\vec{v} \in \mathbb{R}^n$ is an $n$-tuple of real numbers. For example, consider a vector that has three components: \[ \vec{v} = (v_1,v_2,v_3) \ \in \ (\mathbb{R},\mathbb{R},\mathbb{R}) \equiv \mathbb{R}^3. \] To specify the vector $\vec{v}$, we specify the values for its three components $v_1$, $v_2$, and $v_3$.
A matrix $A \in \mathbb{R}^{m\times n}$ is a rectangular array of real numbers with $m$ rows and $n$ columns.
A vector is a special type of matrix; we can think of a vector $\vec{v}\in \mathbb{R}^n$
either as a row vector (a $1\times n$ matrix) or a column vector (an $n \times 1$ matrix).
Because of this equivalence between vectors and matrices,
there is no need for a special vector object in sympy
,
and Matrix
objects are used for vectors as well.
Here is how to define a vector in sympy
and a demo of some basic vector operations:
>>> u = Matrix([4,5,6]) # defines new col. vector u >>> u Matrix([ # col vec = 3x1 matrix [4], [5], [6]]) >>> u.T # use the transpose operation to Matrix([[4, 5, 6]]) # convert a col vec to a row vec >>> u[0] # 0-based indexing for vector entries 4 >>> u.norm() # length of u sqrt(77) >>> uhat = u/u.norm() # unit length vec in dir of u >>> uhat.T [4/sqrt(77), 5/sqrt(77), 6/sqrt(77)] >>> uhat.norm() 1
The dot product of two $3$-vectors $\vec{u}$ and $\vec{v}$ is \[ \vec{u}\cdot\vec{v}\equiv u_xv_x+u_yv_y+u_zv_z \equiv \|\vec{u}\|\|\vec{v}\|\cos(\varphi) \quad \in \mathbb{R}, \] where $\varphi$ is the angle between the two vectors.
>>> u = Matrix([4,5,6]) >>> v = Matrix([-1,1,2]) >>> u.dot(v) 13
We can combine the algebraic and the geometric formulas for the dot product
to obtain the cosine of the angle between the vectors
\[
\cos(\varphi)
= \frac{ \vec{u}\cdot\vec{v} }{ \|\vec{u}\|\|\vec{v}\| }
= \frac{ u_xv_x+u_yv_y+u_zv_z }{ \|\vec{u}\|\|\vec{v}\| },
\]
and use the acos
function to find the measure of the angle:
>>> acos(u.dot(v)/(u.norm()*v.norm())).evalf() 0.921263115666387 # in radians
Just looking at the coordinates of the vectors $\vec{u}$ and $\vec{v}$, it would be difficult to get an intuition of their relative direction, but thanks to the above calculation we know the angle between the vectors is $52.76^\circ$, which means they kind of point in the same direction. Vectors that are at an angle $\varphi=90^\circ$ are called orthogonal, meaning at right angles with each other. Vectors for which $\varphi > 90^\circ$ have a negative dot product.
The notion of “angle between vectors” generalizes to vectors with any number of dimensions. The dot product for $n$-dimensional vectors is: $\vec{u}\cdot\vec{v}=\sum_{i=1}^n u_iv_i$. This means you can talk about “the angle between” 1000-dimensional vectors. That's pretty crazy if you think about it—there is no way we could possibly “visualize” 1000-dimensional vectors, yet given two such vectors we can tell if they point mostly in the same direction, in perpendicular directions, or mostly in opposite directions.
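As an illustration, the same cosine formula works unchanged in higher dimensions (a sketch with two made-up five-dimensional vectors):

```python
from sympy import Matrix, acos

# two arbitrary 5-dimensional vectors (made up for this example)
u = Matrix([1, 2, 3, 4, 5])
v = Matrix([5, 4, 3, 2, 1])
cos_phi = u.dot(v)/(u.norm()*v.norm())
print(cos_phi)                  # 7/11
print(acos(cos_phi).evalf())    # the angle between u and v, in radians
```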
The dot product is commutative $\vec{u}\cdot\vec{v} = \vec{v}\cdot\vec{u}$:
>>> u.dot(v) == v.dot(u) True
Dot products are very useful for computing projections. Assume you have two vectors $\vec{u}$ and $\vec{n}$ and you want to find the component of $\vec{u}$ that points in the direction of $\vec{n}$. The following formula based on the dot product will give you the answer: \[ \Pi_{\vec{n}}( \vec{u} ) = \frac{ \vec{u} \cdot \vec{n} }{ \| \vec{n} \|^2 } \vec{n}. \]
This is how to implement this formula in sympy
:
>>> u = Matrix([4,5,6]) >>> n = Matrix([1,1,1]) >>> (u.dot(n) / n.norm()**2)*n Matrix([5, 5, 5])
In the case where the direction vector $\hat{n}$ is of unit length $\|\hat{n}\| = 1$, the projection formula simplifies to $(\vec{u}\cdot\hat{n})\hat{n}$.
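A quick sketch confirming that the simplified unit-vector formula gives the same projection as the calculation above:

```python
from sympy import Matrix, simplify

u = Matrix([4, 5, 6])
n = Matrix([1, 1, 1])
nhat = n/n.norm()                 # unit vector in the direction of n
# projection using the simplified formula (u . nhat) nhat
proj = (u.dot(nhat)*nhat).applyfunc(simplify)
print(proj)                       # the column vector (5, 5, 5)
```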
Consider now the plane $P$ defined by the equation $(1,1,1)\cdot[(x,y,z)-(0,0,0)]=0$. A plane is a two dimensional subspace of $\mathbb{R}^3$. We can decompose any vector $\vec{u} \in \mathbb{R}^3$ into two parts $\vec{u}=\vec{v} + \vec{w}$ such that $\vec{v}$ lies inside the plane and $\vec{w}$ is perpendicular to the plane (parallel to $\vec{n}=(1,1,1)$).
To obtain the perpendicular-to-$P$ component of $\vec{u}$, compute the projection of $\vec{u}$ in the direction $\vec{n}$:
>>> w = (u.dot(n) / n.norm()**2)*n >>> w Matrix([5, 5, 5])
To obtain the in-the-plane-$P$ component of $\vec{u}$, start with $\vec{u}$ and subtract the perpendicular-to-$P$ part:
>>> v = u - (u.dot(n)/n.norm()**2)*n # same as u - w >>> v Matrix([-1, 0, 1])
You should check on your own that $\vec{v}+\vec{w}=\vec{u}$ as claimed.
The cross product, denoted $\times$, takes two vectors as inputs and produces another vector as the output. The cross products of individual basis elements are defined as follows: \[ \hat{\imath}\times\hat{\jmath} =\hat{k}, \ \ \ \hat{\jmath}\times\hat{k} =\hat{\imath}, \ \ \ \hat{k}\times \hat{\imath}= \hat{\jmath}. \]
Here is how to compute the cross product of two vectors in sympy
:
>>> u = Matrix([4,5,6]) >>> v = Matrix([-1,1,2]) >>> u.cross(v) Matrix([[4, -14, 9]])
The vector $\vec{u}\times \vec{v}$ is orthogonal to both $\vec{u}$ and $\vec{v}$. The norm of the cross product $|\vec{u}\times \vec{v}|$ is proportional to the lengths of the vectors and the sine of the angle between them:
>>> (u.cross(v).norm()/(u.norm()*v.norm())).n() 0.796366206088088 # = sin(0.921..)
The name “cross product” is well suited for this operation, because the entry for each dimension in the cross product is computed by “cross-multiplying” the entries in the other dimensions: \[ \vec{u}\times\vec{v}= \left( u_yv_z-u_zv_y, \ u_zv_x-u_xv_z, \ u_xv_y-u_yv_x \right). \]
By defining individual symbols for the entries of two vectors, we can ask sympy to show us the cross-product formula:
>>> u1,u2,u3 = symbols('u1:4')
>>> v1,v2,v3 = symbols('v1:4')
>>> Matrix([u1,u2,u3]).cross(Matrix([v1,v2,v3]))
[ u2*v3 - u3*v2]
[-u1*v3 + u3*v1]
[ u1*v2 - u2*v1]
The cross product is anti-commutative: $\vec{u}\times\vec{v} = -\vec{v}\times\vec{u}$:
>>> u.cross(v)
Matrix([[4, -14, 9]])
>>> v.cross(u)
Matrix([[-4, 14, -9]])
Watch out for this: the product of two numbers and the dot product of two vectors are commutative operations. The cross product isn't: $\vec{u}\times\vec{v} \neq \vec{v}\times\vec{u}$.
The module called sympy.physics.mechanics contains elaborate tools for describing mechanical systems, reference frames, forces, torques, etc. These specialized functions are not necessary for a first-year mechanics course, though. The basic sympy functions like solve (to solve equations) and the vector operations described above are powerful enough to solve most Newtonian mechanics problems.
The net force acting on an object is the sum of all the external forces acting on it.
Let $x(t)$ denote the position of an object, $v(t)\equiv x'(t)$ denote its velocity, and $a(t)\equiv v'(t)$ denote its acceleration. Together $x(t)$, $v(t)$, and $a(t)$ are known as the \emph{equations of motion} of the object. The equations of motion are related by the derivative operation: \[ a(t) \overset{\frac{d}{dt} }{\longleftarrow} v(t) \overset{\frac{d}{dt} }{\longleftarrow} x(t). \] Assuming we know the initial position $x_i\equiv x(0)$ and the initial velocity of the object $v_i\equiv v(0)$, we want to find $x(t)$ for all later times. We can do this starting from the dynamics of the problem—the forces acting on the object.
Newton's second law $\vec{F}_{\textrm{net}} = m\vec{a}$ states that a net force $\vec{F}_{\textrm{net}}$ applied on an object of mass $m$ produces acceleration $\vec{a}$. Thus, we can calculate an object's acceleration if we know the net force acting on it. Starting from the knowledge of $a(t)$, we can obtain $v(t)$ by integrating, then find $x(t)$ by integrating $v(t)$: \[ a(t) \ \ \ \overset{v_i+ \int\!dt }{\longrightarrow} \ \ \ v(t) \ \ \ \overset{x_i+ \int\!dt }{\longrightarrow} \ \ \ x(t). \] The reasoning follows from the fundamental theorem of calculus: if $a(t)$ represents the change in $v(t)$, then the total of $a(t)$ accumulated between $t=t_1$ and $t=t_2$ is equal to the total change in $v(t)$ between these times: $\Delta v = v(t_2) - v(t_1)$. Similarly, the integral of $v(t)$ from $t=0$ until $t=\tau$ is equal to $x(\tau) - x(0)$.
Let's analyze the case where the net force on the object is constant. A constant force causes a constant acceleration $a = \frac{F}{m} = \textrm{constant}$, so the acceleration function is constant over time: $a(t)=a$. We find $v(t)$ and $x(t)$ as follows:
>>> t, v_i, x_i = symbols('t v_i x_i')
>>> a = Symbol("a")
>>> v = v_i + integrate(a, (t, 0,t) )
>>> v
a*t + v_i
>>> x = x_i + integrate(v, (t, 0,t) )
>>> x
a*t**2/2 + v_i*t + x_i
You may remember these equations from your high school mechanics class.
These are known as the uniformly accelerated motion (UAM) equations:
\begin{align}
a(t) &= a,  \\
v(t) &= v_i + at,  \\
x(t) &= x_i + v_it + \frac{1}{2}at^2.
\end{align}
In high school, you probably had to memorize these equations.
Now you know how to derive them yourself starting from first principles.
For the sake of completeness, we'll now derive the fourth UAM equation, which relates the final velocity to the initial velocity, the displacement, and the acceleration without reference to time:
>>> (v*v).expand()
a**2*t**2 + 2*a*t*v_i + v_i**2
>>> ((v*v).expand() - 2*a*x).simplify()
-2*a*x_i + v_i**2
The above calculation shows that $v(t)^2 - 2ax(t) = -2ax_i + v_i^2$ for all $t$. Evaluating this expression at the final time and moving the term $2ax_f$ to the other side of the equation, we obtain \begin{align} v_f^2 = v_i^2 + 2a\Delta x = v_i^2 + 2a(x_f-x_i). \end{align} The fourth equation is important for practical purposes because it allows you to solve physics problems in a time-less manner. It's also important for theoretical purposes since it resembles an energy calculation. Indeed, if you multiply the fourth equation by $\frac{1}{2}m$, you'll obtain the conservation of energy equation $\frac{1}{2}mv_f^2 = \frac{1}{2}mv_i^2 + ma\Delta x$, which for free fall under gravity reads $K_f = K_i + mg\Delta h$.
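For instance, here is a hypothetical time-less problem solved with the fourth equation: a car moving at $v_i=20$[m/s] brakes with $a=-5$[m/s$^2$]; how far does it travel before stopping? The numbers are made up for illustration:

```python
from sympy import symbols, solve, Eq

d = symbols('d', positive=True)   # stopping distance Delta x
v_i, v_f, a = 20, 0, -5           # made-up values for this example
# fourth UAM equation: v_f**2 == v_i**2 + 2*a*d
sol = solve(Eq(v_f**2, v_i**2 + 2*a*d), d)
print(sol)                        # [40]  -> the car travels 40[m]
```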
The procedure $a(t) \ \overset{v_i+ \int\!dt }{\longrightarrow} \ v(t) \ \overset{x_i+ \int\!dt }{\longrightarrow} \ x(t)$ can be used to obtain the position function $x(t)$ even when the acceleration is not constant in time. For example, suppose we have $a(t)=\sqrt{k t}$, then the position as a function of time will be
>>> k = Symbol("k")
>>> a = sqrt(k*t)
>>> v = v_i + integrate(a, (t, 0,t) )
>>> v
v_i + 2*(k*t)**(3/2)/(3*k)
>>> x = x_i + integrate(v, (t, 0,t) )
>>> x
x_i + v_i*t + (4/15)*(k*t)**(5/2)/k**2
Find the position function of an object at time $t =3$[s], if it starts from $x_i=20$[m], with $v_i=10$[m/s] and undergoes a constant acceleration of $a=5$[m/s$^2$].
>>> x_i = 20   # initial position
>>> v_i = 10   # initial velocity
>>> a = 5      # acceleration (constant during motion)
>>> x = x_i + integrate( v_i+integrate(a,(t,0,t)), (t,0,t) )
>>> x
5*t**2/2 + 10*t + 20
>>> x.subs({t:3})
145/2          # [m]
Do you see what I mean when I say that math and physics knowledge combined with computer skills is like having superpowers?
Instead of working with the kinematic equations of motion $x(t)$, $v(t)$, and $a(t)$, we can solve physics problems using energy calculations. A key connection between the world of forces and the world of energy is the concept of potential energy. If you move an object against a conservative force (think raising a ball in the air against the force of gravity), you can think of the work you did as stored as potential energy.
For each force $\vec{F}(x)$ there is a corresponding potential energy function $U_F(x)$. The change in potential energy associated with the force $\vec{F}$ is defined as the negative of the work done by the force during the displacement: $\Delta U_F = - W = - \int_{\vec{d}} \vec{F}\cdot d\vec{x}$.
The potential energies associated with the gravitational force $\vec{F}_g = -mg\hat{\jmath}$ and the force of a spring $\vec{F}_s = -k\vec{x}$ are
>>> x, y, z, t = symbols('x y z t')
>>> m, g, k, h = symbols('m g k h')
>>> F_g = -m*g
>>> U_g = - integrate( F_g, (y,0,h) )
>>> U_g
g*h*m
>>> F_s = -k*x
>>> U_s = - integrate( F_s, (x,0,x) )
>>> U_s
k*x**2/2
Note the negative sign in the definition of potential, which gets canceled because the dot product $\vec{F}\cdot d\vec{x}$ is negative in both calculations. When the force acts in the opposite direction of the displacement, the work done by the force is negative.
from sympy import Function, dsolve
The force exerted by a spring is given by the formula $F=-kx$.
If the only force acting on a mass $m$ is the force of a spring,
we can use Newton's second law to obtain the following equation
\[
F=ma
\quad \Rightarrow \quad
-kx = ma
\quad \Rightarrow \quad
-kx(t) = m\frac{d^2}{dt^2}\Big[x(t)\Big].
\]
The motion of a mass-spring system is described by the differential equation $\frac{d^2}{dt^2}x(t) + \omega^2 x(t)=0$,
where the constant $\omega = \sqrt{\frac{k}{m}}$ is the angular frequency.
We can find the position function $x(t)$ using the dsolve function:
>>> t = Symbol('t')                  # time t
>>> x = Function('x')                # position function x(t)
>>> w = Symbol('w', positive=True)   # angular frequency w
>>> sol = dsolve( diff(x(t),t,t) + w**2*x(t), x(t) )
>>> sol
x(t) == C1*sin(w*t) + C2*cos(w*t)
>>> x = sol.rhs
>>> x
C1*sin(w*t) + C2*cos(w*t)
Note the solution $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ is equivalent to $x(t) = A\cos(\omega t + \phi)$,
which is more commonly used to describe simple harmonic motion.
We can use the expand function with the argument trig=True to convince ourselves:
>>> A, phi = symbols("A phi")
>>> (A*cos(w*t - phi)).expand(trig=True)
A*sin(phi)*sin(w*t) + A*cos(phi)*cos(w*t)
If we define $C_1=A\sin(\phi)$ and $C_2=A\cos(\phi)$, we obtain the form $x(t)=C_1\sin(\omega t)+C_2 \cos(\omega t)$ that sympy found.
We can also verify that the total energy of the mass-spring system is conserved: $E_T(t) = U_s(t) + K(t) = \textrm{constant}$:
>>> x = sol.rhs.subs({"C1":0,"C2":A})
>>> x
A*cos(t*w)
>>> v = diff(x, t)
>>> v
-A*w*sin(t*w)
>>> E_T = (0.5*k*x**2 + 0.5*m*v**2).simplify()
>>> E_T
0.5*A**2*(k*cos(t*w)**2 + m*w**2*sin(t*w)**2)
>>> E_T.subs({k:m*w**2}).simplify()
0.5*m*(w*A)**2           # = K_max
>>> E_T.subs({w:sqrt(k/m)}).simplify()
0.5*k*A**2               # = U_max
from sympy import Matrix
A matrix $A \in \mathbb{R}^{m\times n}$ is a rectangular array of real numbers with $m$ rows and $n$ columns. To specify a matrix $A$, we specify the values for its $mn$ components $a_{11}, a_{12}, \ldots, a_{mn}$.
>>> A = Matrix( [[2,-3,-8, 7], [-2,-1,2,-7], [1 ,0,-3, 6]])
Use the square brackets to access the matrix elements or to obtain a submatrix:
>>> A[0,1]
-3
>>> A[0:2, 0:3]
[ 2, -3, -8]
[-2, -1,  2]
Some commonly used matrices can be created with shortcut methods:
>>> eye(2)
[1, 0]
[0, 1]
>>> zeros(2, 3)
[0, 0, 0]
[0, 0, 0]
The transpose operation flips the matrix through its diagonal:
>>> A.transpose()   # the same as A.T
[ 2, -2,  1]
[-3, -1,  0]
[-8,  2, -3]
[ 7, -7,  6]
Recall that the transpose is used to convert row vectors into column vectors and vice versa.
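For example, a row vector (a $1\times 3$ matrix) becomes a column vector (a $3\times 1$ matrix) when transposed:

```python
from sympy import Matrix

u = Matrix([[1, 2, 3]])   # a row vector: 1 row, 3 columns
print(u.shape)            # (1, 3)
print(u.T.shape)          # (3, 1) -> a column vector
print(u.T.T == u)         # True: transposing twice gives back u
```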
The Gauss–Jordan elimination procedure is a sequence of row operations you can perform
on any matrix to bring it to its reduced row echelon form (RREF).
In sympy, matrices have a rref method that computes the RREF:
>>> A = Matrix( [[2,-3,-8, 7], [-2,-1,2,-7], [1 ,0,-3, 6]])
>>> A.rref()
([1, 0, 0,  0]     # RREF of A
 [0, 1, 0,  3]
 [0, 0, 1, -2],
 [0, 1, 2] )       # locations of pivots
Note the rref() method returns a tuple of values: the first value is the RREF of $A$, while the second tells you the indices of the leading ones (also known as pivots) in the RREF of $A$. To get just the RREF of $A$, select the 0th entry from the tuple:
>>> Arref = A.rref()[0]
>>> Arref
[1, 0, 0,  0]
[0, 1, 0,  3]
[0, 0, 1, -2]
The fundamental spaces of a matrix are its column space $\mathcal{C}(A)$, its null space $\mathcal{N}(A)$, and its row space $\mathcal{R}(A)$. These vector spaces are important when you consider the matrix product $A\vec{x}=\vec{y}$ as “applying” the linear transformation $T_A:\mathbb{R}^4 \to \mathbb{R}^3$ to an input vector $\vec{x} \in \mathbb{R}^4$ to produce the output vector $\vec{y} \in \mathbb{R}^3$.
The set of linear transformations $T_A:\mathbb{R}^n \to \mathbb{R}^m$ (vector functions) is equivalent to the set of $m\times n$ matrices. This is one of the fundamental ideas in linear algebra. You can think of $T_A$ as the abstract description of the transformation and $A \in \mathbb{R}^{m\times n}$ as a concrete implementation of $T_A$. By this equivalence, the fundamental spaces of a matrix $A$ tell us facts about the domain and image of the linear transformation $T_A$. The column space $\mathcal{C}(A)$ is the same as the image space $\textrm{Im}(T_A)$ (the set of all possible outputs). The null space $\mathcal{N}(A)$ is the same as the null space $\textrm{Null}(T_A)$ (the set of inputs that $T_A$ maps to the zero vector). The row space $\mathcal{R}(A)$ is the orthogonal complement of the null space. Input vectors in the row space of $A$ are in one-to-one correspondence with the output vectors in the column space of $A$.
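To make this concrete, here is a sketch of "applying" $T_A$ to an input vector; the input vector is chosen arbitrarily for illustration:

```python
from sympy import Matrix

A = Matrix([[2, -3, -8, 7], [-2, -1, 2, -7], [1, 0, -3, 6]])
x = Matrix([1, 1, 1, 1])   # an arbitrary input vector in R^4
y = A * x                  # the output y = T_A(x) lives in C(A), a subspace of R^3
print(y)                   # Matrix([[-2], [-8], [4]])
```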
Okay, enough theory! Let's see how to compute the fundamental spaces of the matrix $A$ defined above. The non-zero rows in the reduced row echelon form of $A$ form a basis for its row space:
>>> [ A.rref()[0][r,:] for r in A.rref()[1] ]   # R(A)
[ [1, 0, 0, 0], [0, 1, 0, 3], [0, 0, 1, -2] ]
The column space of $A$ is the span of the columns of $A$ that contain the pivots in the reduced row echelon form of $A$:
>>> [ A[:,c] for c in A.rref()[1] ]             # C(A)
[ [ 2]    [-3]    [-8]
  [-2],   [-1],   [ 2]
  [ 1]    [ 0]    [-3] ]
Note that we took the columns from the original matrix $A$ and not its RREF.
To find the null space of $A$, call its nullspace method:
>>> A.nullspace()                               # N(A)
[ [0, -3, 2, 1] ]
The determinant of a matrix, denoted $\det(A)$ or $|A|$, is a particular way to multiply the entries of the matrix to produce a single number.
>>> M = Matrix( [[1, 2, 3], [2,-2, 4], [2, 2, 5]] )
>>> M.det()
2
We use determinants for all kinds of tasks: to compute areas and volumes, to solve systems of equations, to check whether a matrix is invertible or not, etc.
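As a quick sanity check of the invertibility criterion, consider a made-up matrix whose rows are linearly dependent; its determinant is zero, so it is not invertible:

```python
from sympy import Matrix

S = Matrix([[1, 2],
            [2, 4]])   # second row is twice the first
print(S.det())         # 0  -> S is singular (not invertible)
print(S.rank())        # 1  -> only one linearly independent row
```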
For every invertible matrix $A$, there exists an inverse matrix $A^{-1}$ which undoes the effect of $A$. The cumulative effect of the product of $A$ and $A^{-1}$ in any order is the identity matrix: $AA^{-1}= A^{-1}A=\mathbbm{1}$.
>>> A = Matrix( [[1,2], [3,9]] )
>>> A.inv()
[ 3, -2/3]
[-1,  1/3]
>>> A.inv()*A
[1, 0]
[0, 1]
>>> A*A.inv()
[1, 0]
[0, 1]
The fundamental eigenvalue equation is \[ A\vec{e}_\lambda =\lambda\vec{e}_\lambda, \] where $\lambda$ is an eigenvalue of $A$ and $\vec{e}_\lambda$ is the corresponding eigenvector. When we multiply $A$ by one of its eigenvectors $\vec{e}_\lambda$, the result is the same vector scaled by the constant $\lambda$.
The matrix $A$ can be written as the product of three matrices: $A=Q\Lambda Q^{-1}$. This is called the eigendecomposition of $A$. The matrix $Q$ contains the eigenvectors of $A$ as columns. The matrix $\Lambda$ contains the eigenvalues of $A$ on its diagonal.
To diagonalize a matrix $A$ is to find its $Q$ and $\Lambda$ matrices.
>>> A = Matrix([ [ 9, -2], [-2, 6]])
>>> A.eigenvals()
{5: 1, 10: 1}          # eigenvalues 5 and 10 with multiplicity 1
>>> A.eigenvects()
[(5,  1, [ 1]
         [ 2] ),
 (10, 1, [-2]
         [ 1] )]
>>> Q, Λ = A.diagonalize()
>>> Q                  # the matrix of eigenvectors as columns
[1, -2]
[2,  1]
>>> Q.inv()
[ 1/5, 2/5]
[-2/5, 1/5]
>>> Λ                  # the matrix of eigenvalues
[5,  0]
[0, 10]
>>> Q*Λ*Q.inv()        # eigendecomposition of A
[ 9, -2]
[-2,  6]
>>> Q.inv()*A*Q        # obtain Λ from A and Q
[5,  0]
[0, 10]
Not all matrices are diagonalizable.
You can check whether a matrix is diagonalizable by calling its is_diagonalizable method:
>>> A.is_diagonalizable()
True
>>> B = Matrix([[ 1, 3], [ 0, 1]])
>>> B.is_diagonalizable()
False
>>> B.eigenvals()
{1: 2}                 # eigenvalue 1 with multiplicity 2
>>> B.eigenvects()
[(1, 2, [1]
        [0] )]
The matrix $B$ is not diagonalizable because it doesn't have a full set of eigenvectors. To diagonalize a $2\times 2$ matrix, we need two linearly independent eigenvectors, but $B$ has only a single eigenvector. Therefore, we can't construct the matrix of eigenvectors $Q$ (we're missing a column!), and so $B$ is not diagonalizable.
Non-square matrices do not have eigenvectors and therefore don't have an eigendecomposition. Instead, we can use the singular value decomposition to break up a non-square matrix $A$ into left singular vectors, right singular vectors, and a diagonal matrix of singular values. Use the singular_values method on any matrix to find its singular values.
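For example, here are the singular values of a made-up $2\times 3$ matrix:

```python
from sympy import Matrix, sqrt

A = Matrix([[1, 0, 1],
            [0, 1, 1]])      # non-square: no eigendecomposition exists
svals = A.singular_values()  # singular values: sqrt(3) and 1
print(svals)
```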
It is possible to write a matrix $A$ as the product of an orthogonal matrix $Q$ and an upper triangular matrix $R$. This is known as the QR-decomposition.
>>> A = Matrix([[12,-51,4], [6,167,-68], [-4,24,-41] ])
>>> Q, R = A.QRdecomposition()
>>> Q
[ 6/7, -69/175, -58/175]
[ 3/7, 158/175,   6/175]
[-2/7,    6/35,  -33/35]
>>> Q*Q.T              # Q is orthogonal
[1, 0, 0]
[0, 1, 0]
[0, 0, 1]
>>> R                  # R is upper triangular
[14,  21, -14]
[ 0, 175, -70]
[ 0,   0,  35]
>>> Q*R
[12, -51,   4]
[ 6, 167, -68]
[-4,  24, -41]
Each sympy matrix is also equipped with LUdecomposition and cholesky decomposition methods.
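Here is a quick sketch of both decompositions on small made-up matrices:

```python
from sympy import Matrix

A = Matrix([[4, 3],
            [6, 3]])
L, U, perm = A.LUdecomposition()   # A = L*U (perm lists any row swaps)
print(L)          # lower triangular, with 1s on the diagonal
print(U)          # upper triangular
print(L*U == A)   # True (no row swaps were needed here)

M = Matrix([[4, 2],
            [2, 3]])               # symmetric positive-definite
C = M.cholesky()                   # M = C*C.T with C lower triangular
print(C*C.T == M)                  # True
```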
I would like to conclude with some words of caution about the overuse of computers. Computer technology is very powerful and is everywhere around us, but let's not forget that computers are actually very dumb: computers are mere calculators and they depend on your knowledge to direct them. It's important that you know how to do complicated math by hand in order to be able to instruct computers to do math for you and to check the results of your computer calculations. I don't want you to use the tricks you learned in this tutorial to avoid math problems from now on and blindly rely on sympy for all your math needs. I want both you and the computer to become math powerhouses! The computer will help you with tedious calculations (they're good at that) and you'll help the computer by fixing it when it gets stuck (humans are good at that).
The coverage of the math and physics topics in this tutorial was not sufficient to do justice to the subjects. The goal here was to quickly introduce you to the useful sympy commands. If you're interested in learning more about calculus and mechanics, you should consider the No bullshit guide to math and physics, a short textbook like no other. The book starts with an introductory chapter that reviews high school math concepts in order to make math and physics ideas accessible to everyone.
[ The official sympy tutorial ]
http://docs.sympy.org/0.7.2/tutorial.html
[ A list of python gotchas ]
http://docs.sympy.org/dev/gotchas.html
[ Numeric matrix manipulation ]
http://sebastianraschka.com/Articles/2014_matrix_cheatsheet.html
http://sebastianraschka.com/Articles/2014_matrix_cheatsheet_table.html
Math with integers only? Math in BASH!!
let a=1
let a+=2    # a is 3 now
let b=a**2  # b is 9
[ Math prerequisites for CS ]
http://i308.wikispaces.com/Mathematical+prerequisites
[ A list of frameworks ]
https://github.com/josephmisiti/awesome-machine-learning
In supervised learning, the machine infers a function from a set of training examples. In unsupervised learning the machine tries to find hidden structure in unlabeled data.
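As a toy illustration of supervised learning, here is a minimal 1-nearest-neighbour classifier sketch in pure Python; the training data is made up:

```python
# training examples: (features, label) pairs of labeled data
train = [((1.0, 1.0), 'a'), ((1.2, 0.9), 'a'),
         ((5.0, 5.1), 'b'), ((4.9, 5.3), 'b')]

def predict(x):
    """Return the label of the training example nearest to x."""
    nearest = min(train,
                  key=lambda p: sum((xi - pi)**2 for xi, pi in zip(x, p[0])))
    return nearest[1]

print(predict((1.1, 1.0)))  # 'a'
print(predict((4.8, 5.0)))  # 'b'
```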
[ Curriculum through online courses ]
http://richardminerich.com/2012/12/my-education-in-machine-learning-via-cousera/
[ List of freely available books ]
http://metaoptimize.com/qa/questions/186/good-freely-available-textbooks-on-machine-learning
http://efytimes.com/e1/fullnews.asp?edid=121516
https://news.ycombinator.com/item?id=7120391
[ Recommendations ]
http://conductrics.com/data-science-resources/
[ SVD old-school video ]
http://www.youtube.com/watch?v=R9UoFyqJca8
[ Guide to online Data Science courses ]
http://www.bigdatarepublic.com/author.asp?section_id=2809&doc_id=257527&
[ More book recommendations from Quora ]
http://www.quora.com/What-are-some-good-resources-for-learning-about-machine-learning-Why
[ Collection of video lectures ]
http://work.caltech.edu/library/
[ Intro book on data mining ]
http://guidetodatamining.com/
http://www.dmi.usherb.ca/~larocheh/mlpython/tutorial.html#tutorial
http://info.usherbrooke.ca/hlarochelle/cours/ift725_A2013/contenu.html
[ Video lectures ]
http://homepages.inf.ed.ac.uk/vlavrenk/iaml.html
http://videolectures.net/mlss09uk_cambridge/
[ Data science curriculum ]
http://www.mysliderule.com/learning-paths/data-analysis/learn/
[ An Introduction to Data Mining ]
http://saedsayad.com/
[ Introduction to Statistical Thinking (With R, Without Calculus) ]
http://pluto.huji.ac.il/~msby/StatThink/
[ Try R code school ]
http://tryr.codeschool.com/
[ Interactive R educational package ]
http://swirlstats.com/students.html
[ Advanced R programming ]
http://adv-r.had.co.nz/
https://github.com/hadley/adv-r
[ R language quirks ]
https://github.com/tdsmith/aRrgh
[ 1h tutorial ] http://alyssafrazee.com/introducing-R.html
[ video intro ] http://medianetwork.oracle.com/video/player/2623621262001
http://www.theswarmlab.com/r-vs-python-round-1/
[ super hero themed tutorial ]
http://cierareports.org/blog/2013/10/18/rCourse2013/index.html
[ One page R ] http://onepager.togaware.com/
You can query data in a database (ask it questions). You can look up data from a database relatively rapidly. You can relate data from two different tables together using JOINs. You can create meaningful reports from data in a database. Your data has a built-in structure to it. Information of a given type is always stored only once. Databases are ACID. Databases are fault-tolerant. Databases can handle very large data sets. Databases are concurrent; multiple users can use them at the same time without corrupting the data. Databases scale well.
via http://programmers.stackexchange.com/questions/190482/why-use-a-database-instead-of-just-saving-your-data-to-disk
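To illustrate relating data across tables with a JOIN, here is a minimal sketch using Python's built-in sqlite3 module; the table names and rows are made up:

```python
import sqlite3

conn = sqlite3.connect(':memory:')   # throwaway in-memory database
c = conn.cursor()
c.execute('CREATE TABLE users (id INTEGER, name TEXT)')
c.execute('CREATE TABLE orders (user_id INTEGER, item TEXT)')
c.executemany('INSERT INTO users VALUES (?, ?)',
              [(1, 'alice'), (2, 'bob')])
c.executemany('INSERT INTO orders VALUES (?, ?)',
              [(1, 'book'), (1, 'pen'), (2, 'ink')])
# relate rows from the two tables via the user id
rows = c.execute('SELECT u.name, o.item FROM users u '
                 'JOIN orders o ON u.id = o.user_id '
                 'ORDER BY u.name, o.item').fetchall()
print(rows)  # [('alice', 'book'), ('alice', 'pen'), ('bob', 'ink')]
conn.close()
```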
[ Venn diagrams for JOIN operation ]
http://www.codinghorror.com/blog/2007/10/a-visual-explanation-of-sql-joins.html
[ puzzles to try ]
http://blogs.lessthandot.com/index.php/ITProfessionals/EthicsIT/collection-of-puzzles-for-programmers
[ Become a better programmer with programming challenges ]
http://macgyverdev.blogspot.se/2014/04/become-better-programmer-with.html
http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=3
[ Lots of tutorials on sysadmin and other backend technology ]
https://www.digitalocean.com/community
[ What should a self-taught programmer study? ]
https://news.ycombinator.com/item?id=7612783
[ Various resources for further reading ]
http://maxehmookau.github.io/blog/2014/04/28/the-graduate-software-developers-reading-lis/