Shawn Zhong

钟万祥
  • Tutorials
  • Mathematics
    • Math 240
    • Math 375
    • Math 431
    • Math 514
    • Math 521
    • Math 541
    • Math 632
    • Abstract Algebra
    • Linear Algebra
    • Category Theory
  • Computer Sciences
    • CS/ECE 252
    • CS/ECE 352
    • Learn Haskell
  • Projects
    • 2048 Game
    • HiMCM 2016
    • 登峰杯 MCM
  • Course Notes
    • AP Microecon
    • AP Macroecon
    • AP Statistics
    • AP Chemistry
    • AP Physics E&M
    • AP Physics Mech
    • CLEP Psycho


3.3 Complexity of Algorithms

  • Feb 28, 2018
  • Shawn
  • Math 240
The Complexity of Algorithms
• Given an algorithm, how efficient is it at solving a problem, given input of a particular size?
• To answer this question, we ask:
  ○ How much time does the algorithm use to solve the problem?
  ○ How much computer memory does it use?
• When we analyze the time the algorithm uses to solve the problem given input of a particular size, we are studying the time complexity of the algorithm.
• When we analyze the computer memory the algorithm uses, we are studying the space complexity of the algorithm.
• In this course, we focus on time complexity; the space complexity of algorithms is studied in later courses.
• We measure time complexity in terms of the number of operations an algorithm uses, and we use big-O and big-Theta notation to estimate it.
• This analysis lets us see whether it is practical to use an algorithm on inputs of a particular size, and lets us compare the efficiency of different algorithms for solving the same problem.
• We ignore implementation details, because taking them into account is extremely complicated.

Time Complexity
• To analyze the time complexity of an algorithm, we determine the number of operations it performs, such as comparisons and arithmetic operations (addition, multiplication, etc.).
• We can then estimate the time a computer actually needs, using the time required for the basic operations.
• We ignore minor details, such as the "housekeeping" aspects of the algorithm.
• We focus on the worst-case time complexity, which gives an upper bound on the number of operations an algorithm uses on input of a particular size.
• The average-case time complexity, the average number of operations over all inputs of a particular size, is usually much more difficult to determine.

Complexity Analysis of Algorithms
• Example: Describe the time complexity of the algorithm for finding the maximum element in a finite sequence.
  ○ Count the number of comparisons.
  ○ The comparison max < a_i is made n − 1 times.
  ○ Each time i is incremented, a test is made to see if i ≤ n.
  ○ One last comparison determines that i > n.
  ○ Exactly 2(n − 1) + 1 = 2n − 1 comparisons are made.
  ○ Hence, the time complexity of the algorithm is Θ(n).
• Example: Determine the time complexity of the linear search algorithm.
  ○ Count the number of comparisons.
  ○ At each step two comparisons are made: i ≤ n and x ≠ a_i.
  ○ If x = a_i, 2i + 1 comparisons are used.
  ○ If x is not in the list, 2n comparisons are made inside the loop, one more comparison i ≤ n exits the loop, and a final i ≤ n comparison is made after the loop.
  ○ So, in the worst case, 2n + 2 comparisons are made.
  ○ Hence, the complexity is Θ(n).
• Example: Describe the average-case performance of the linear search algorithm.
  ○ Assume the element is in the list and that all positions are equally likely.
  ○ By the argument above, if x = a_i, the number of comparisons is 2i + 1.
  ○ (3 + 5 + 7 + … + (2n + 1))/n = (2(1 + 2 + 3 + … + n) + n)/n = 2[n(n + 1)/2]/n + 1 = n + 2
  ○ Hence, the average-case complexity of linear search is Θ(n).
• Example: Describe the time complexity of binary search in terms of the number of comparisons used.
  ○ Assume (for simplicity) n = 2^k elements. Note that k = log n.
  ○ Two comparisons are made at each stage: i < j and x > a_m.
  ○ At the first iteration the size of the list is 2^k, and after the first iteration it is 2^(k−1). Then 2^(k−2), and so on, until the size of the list is 2^1 = 2.
  ○ At the last step, a comparison tells us that the size of the list is 2^0 = 1, and the element is compared with the single remaining element.
  ○ Hence, at most 2k + 2 = 2 log n + 2 comparisons are made.
  ○ Therefore, the time complexity is Θ(log n), better than linear search.
• Example: What is the worst-case complexity of bubble sort in terms of the number of comparisons made?
  ○ A sequence of n − 1 passes is made through the list.
  ○ On the i-th pass, n − i comparisons are made.
  ○ (n − 1) + (n − 2) + … + 2 + 1 = n(n − 1)/2 = (1/2)n^2 − (1/2)n
  ○ The worst-case complexity of bubble sort is Θ(n^2).
• Example: What is the worst-case complexity of insertion sort in terms of the number of comparisons made?
  ○ The total number of comparisons is 2 + 3 + … + n = n(n + 1)/2 − 1.
  ○ Therefore the complexity is Θ(n^2).
• Example: How many additions and multiplications of integers are used by the matrix multiplication algorithm to multiply two n × n matrices?
  ○ There are n^2 entries in the product.
  ○ Finding each entry requires n multiplications and n − 1 additions.
  ○ Hence, n^3 multiplications and n^2(n − 1) additions are used.
  ○ Hence, the complexity of matrix multiplication is O(n^3).

Complexity of Problems
• Tractable problem
  ○ There exists a polynomial-time algorithm to solve it.
  ○ Such problems are said to belong to the class P.
• Intractable problem
  ○ There does not exist a polynomial-time algorithm to solve it.
• Unsolvable problem
  ○ No algorithm at all exists to solve it, e.g., the halting problem.
• Class NP
  ○ A proposed solution can be checked in polynomial time,
  ○ but no polynomial-time algorithm has been found for finding a solution to problems in this class.
• NP-complete class
  ○ If a polynomial-time algorithm is found for one member of the class,
  ○ it can be used to solve all the problems in the class.

P Versus NP Problem
• The P versus NP problem asks whether P = NP:
• are there problems whose solutions can be checked in polynomial time but that cannot be solved in polynomial time?
• Note that the fact that no one has found a polynomial-time algorithm for a problem is different from showing that no polynomial-time algorithm for it can exist.
• If a polynomial-time algorithm for any problem in the NP-complete class were found, it could be used to obtain a polynomial-time algorithm for every problem in the class.
• Satisfiability (in Section 1.3) is an NP-complete problem.
• It is generally believed that P ≠ NP, since no one has been able to find a polynomial-time algorithm for any NP-complete problem.
• The P versus NP problem remains one of the most famous unsolved problems in mathematics (including theoretical computer science).
• The Clay Mathematics Institute has offered a prize of $1,000,000 for a solution.

3.2 The Growth of Functions

  • Feb 27, 2018
  • Shawn
  • Math 240
The Growth of Functions
• In both computer science and mathematics, there are many times when we care about how fast a function grows.
• In computer science, we want to understand how quickly an algorithm can solve a problem as the size of the input grows.
  ○ We can compare the efficiency of two different algorithms for solving the same problem.
  ○ We can also determine whether it is practical to use a particular algorithm as the input grows.
  ○ We'll study these questions in Section 3.3.
• Two of the areas of mathematics where questions about the growth of functions are studied are:
  ○ number theory (covered in Chapter 4)
  ○ combinatorics (covered in Chapters 6 and 8)

Big-O Notation
• Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers.
• We say that f(x) is O(g(x)) if there are constants C and k such that
  ○ |f(x)| ≤ C|g(x)| whenever x > k.
• This is read as "f(x) is big-O of g(x)" or "g asymptotically dominates f."
• The constants C and k are called witnesses to the relationship "f(x) is O(g(x))."
• Only one pair of witnesses is needed.

Some Important Points about Big-O Notation
• If one pair of witnesses is found, then there are infinitely many pairs: we can always make k or C larger and still maintain the inequality |f(x)| ≤ C|g(x)|.
• If f(x) is O(g(x)) and g(x) is O(h(x)), then f(x) is O(h(x)).
• You may see "f(x) = O(g(x))" instead of "f(x) is O(g(x))."
• But this is an abuse of the equals sign, since the meaning is that there is an inequality relating the values of f and g for sufficiently large values of x.
• It is OK to write f(x) ∈ O(g(x)), because O(g(x)) represents the set of functions that are O(g(x)).
• Usually we will drop the absolute value signs, since we will always deal with functions that take on positive values.

Using the Definition of Big-O Notation
• Example: Show that f(x) = x^2 + 2x + 1 is O(x^2).
  ○ When x > 1, we have x < x^2 and 1 < x^2.
  ○ Hence 0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2 when x > 1.
  ○ We can take C = 4 and k = 1 as witnesses to show that f(x) is O(x^2).
  ○ Alternatively, when x > 2, we have 2x ≤ x^2 and 1 < x^2.
  ○ Hence 0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2 when x > 2.
  ○ We can take C = 3 and k = 2 as witnesses instead.
• Example: Show that 7x^2 is O(x^3).
  ○ When x > 7, 7x^2 < x^3.
  ○ Take C = 1 and k = 7 as witnesses to establish that 7x^2 is O(x^3).
  ○ (Would C = 7 and k = 1 work?) Yes.
• Example: Show that n^2 is not O(n).
  ○ Suppose there are constants C and k for which n^2 ≤ Cn whenever n > k.
  ○ Then n ≤ C must hold for all n > k. A contradiction!

Big-O Estimates for Polynomials
• Let f(x) = a_n x^n + a_(n−1) x^(n−1) + … + a_1 x + a_0, where a_0, a_1, …, a_n are real numbers with a_n ≠ 0.
• Then f(x) is O(x^n):
  ○ |f(x)| = |a_n x^n + a_(n−1) x^(n−1) + … + a_1 x + a_0|
  ○ ≤ |a_n| x^n + |a_(n−1)| x^(n−1) + … + |a_1| x + |a_0| (for x > 0, by the triangle inequality)
  ○ = x^n (|a_n| + |a_(n−1)|/x + … + |a_1|/x^(n−1) + |a_0|/x^n)
  ○ ≤ x^n (|a_n| + |a_(n−1)| + … + |a_1| + |a_0|) for x > 1
• Take C = |a_n| + |a_(n−1)| + … + |a_0| and k = 1. Then f(x) is O(x^n).
• The leading term a_n x^n of a polynomial dominates its growth.

Big-O Estimates for Some Important Functions
• Example: Use big-O notation to estimate the sum of the first n positive integers.
  ○ 1 + 2 + … + n ≤ n + n + … + n = n^2
  ○ So 1 + 2 + … + n is O(n^2), taking C = 1 and k = 1.
• Example: Use big-O notation to estimate the factorial function f(n) = n! = 1 × 2 × … × n.
  ○ n! = 1 × 2 × … × n ≤ n × n × … × n = n^n
  ○ So n! is O(n^n), taking C = 1 and k = 1.
• Example: Use big-O notation to estimate log n!.
  ○ Given that n! ≤ n^n,
  ○ log n! ≤ log n^n = n log n.
  ○ Hence, log n! is O(n log n), taking C = 1 and k = 1.

Combinations of Functions
• If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then
  ○ (f_1 + f_2)(x) is O(max{|g_1(x)|, |g_2(x)|}).
• If f_1(x) and f_2(x) are both O(g(x)), then
  ○ (f_1 + f_2)(x) is O(g(x)).
• If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then
  ○ (f_1 f_2)(x) is O(g_1(x) g_2(x)).
Big-Omega Notation
• Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers.
• We say that f(x) is Ω(g(x)) if there are constants C and k such that |f(x)| ≥ C|g(x)| when x > k.
• We say that "f(x) is big-Omega of g(x)."
• Big-O gives an upper bound on the growth of a function, while big-Omega gives a lower bound: big-Omega tells us that a function grows at least as fast as another.
• f(x) is Ω(g(x)) if and only if g(x) is O(f(x)). This follows from the definitions.
• Example: Show f(x) = 8x^3 + 5x^2 + 7 is Ω(g(x)), where g(x) = x^3.
  ○ f(x) = 8x^3 + 5x^2 + 7 ≥ 8x^3 for all positive real numbers x.
  ○ Is it also the case that g(x) = x^3 is O(8x^3 + 5x^2 + 7)? Yes.

Big-Theta Notation
• Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers.
• The function f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x)).
• We say that
  ○ "f is big-Theta of g(x)",
  ○ "f(x) is of order g(x)",
  ○ "f(x) and g(x) are of the same order."
• f(x) is Θ(g(x)) if and only if there exist constants C_1, C_2, and k such that C_1|g(x)| ≤ |f(x)| ≤ C_2|g(x)| when x > k.
• This follows from the definitions of big-O and big-Omega.
• When f(x) is Θ(g(x)), it must also be the case that g(x) is Θ(f(x)).
• Note that f(x) is Θ(g(x)) if and only if f(x) is O(g(x)) and g(x) is O(f(x)).
• Sometimes writers are careless and write as if big-O notation had the same meaning as big-Theta.

Examples of Big-Theta Notation
• Example 1: Show that the sum of the first n positive integers is Θ(n^2).
  ○ Let f(n) = 1 + 2 + … + n.
  ○ We have already shown that f(n) is O(n^2).
  ○ To show that f(n) is Ω(n^2), we need a positive constant C such that f(n) ≥ Cn^2 for sufficiently large n.
  ○ Summing only the terms greater than or equal to ⌈n/2⌉, we obtain the inequality
  ○ 1 + 2 + … + n ≥ ⌈n/2⌉ + (⌈n/2⌉ + 1) + … + n ≥ ⌈n/2⌉ + ⌈n/2⌉ + … + ⌈n/2⌉ = (n − ⌈n/2⌉ + 1)⌈n/2⌉ ≥ (n/2)(n/2) = n^2/4
  ○ Taking C = 1/4, f(n) ≥ Cn^2 for all positive integers n.
  ○ Hence, f(n) is Ω(n^2), and we can conclude that f(n) is Θ(n^2).
• Example 2: Show that f(x) = 3x^2 + 8x log x is Θ(x^2).
  ○ 3x^2 + 8x log x ≤ 11x^2 for x > 1, since 0 ≤ 8x log x ≤ 8x^2.
  ○ Hence, 3x^2 + 8x log x is O(x^2).
  ○ x^2 is clearly O(3x^2 + 8x log x).
  ○ Hence, 3x^2 + 8x log x is Θ(x^2).
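The witnesses found in these examples can be sanity-checked numerically over a finite range of x. The sketch below is my own addition (not from the notes); it checks C = 4, k = 1 for x² + 2x + 1 being O(x²), and the witness pair 3 and 11 for 3x² + 8x log x being Θ(x²).

```python
# A numeric spot-check (not from the notes) of the big-O and big-Theta
# witnesses above, over x = 2, ..., 9999.
import math

def f(x):
    return x**2 + 2 * x + 1

# |f(x)| <= 4 x^2 whenever x > 1, with witnesses C = 4, k = 1
assert all(f(x) <= 4 * x**2 for x in range(2, 10_000))

def g(x):
    return 3 * x**2 + 8 * x * math.log(x)

# 3 x^2 <= g(x) <= 11 x^2 for x > 1 (8x log x <= 8x^2 since log x <= x),
# so g(x) is Theta(x^2)
assert all(3 * x**2 <= g(x) <= 11 * x**2 for x in range(2, 10_000))
```

A finite check like this can never prove an asymptotic bound, but it catches arithmetic slips in a proposed pair of witnesses quickly.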

Math 521 - 2/26

  • Feb 27, 2018
  • Shawn
  • Math 521
Theorem 2.24
(a) For any collection {G_α} of open sets, ⋃_α G_α is open.
  ○ Suppose G_α is open for all α.
  ○ Let G = ⋃_α G_α.
  ○ If x ∈ G, then x ∈ G_α for some α.
  ○ Since G_α is open, there is a neighborhood of x contained in G_α.
  ○ Consequently, that neighborhood is also contained in G.
  ○ Thus G is open.
(b) For any collection {F_α} of closed sets, ⋂_α F_α is closed.
  ○ Suppose F_α is closed for all α.
  ○ Then F_α^c is open by Theorem 2.23.
  ○ So ⋃_α F_α^c is open by (a).
  ○ (⋂_α F_α)^c = ⋃_α F_α^c by De Morgan's law.
  ○ Thus (⋂_α F_α)^c is open.
  ○ Therefore ⋂_α F_α is closed by Theorem 2.23.
(c) For any finite collection G_1, G_2, …, G_n of open sets, ⋂_(i=1)^n G_i is also open.
  ○ Suppose G_1, G_2, …, G_n are open.
  ○ Let x ∈ H = ⋂_(i=1)^n G_i.
  ○ Then x ∈ G_i for 1 ≤ i ≤ n.
  ○ Since each G_i is open, x is contained in a neighborhood N_(r_i)(x) ⊂ G_i.
  ○ Let r = min(r_1, r_2, …, r_n).
  ○ Then N_r(x) ⊂ G_i for 1 ≤ i ≤ n.
  ○ So N_r(x) ⊂ H.
  ○ Thus H = ⋂_(i=1)^n G_i is open.
(d) For any finite collection F_1, F_2, …, F_n of closed sets, ⋃_(i=1)^n F_i is also closed.
  ○ Suppose F_1, F_2, …, F_n are closed.
  ○ Then F_i^c is open by Theorem 2.23.
  ○ So ⋂_(i=1)^n F_i^c is open by (c).
  ○ (⋃_(i=1)^n F_i)^c = ⋂_(i=1)^n F_i^c by De Morgan's law.
  ○ Thus (⋃_(i=1)^n F_i)^c is open.
  ○ Therefore ⋃_(i=1)^n F_i is closed by Theorem 2.23.
• Note
  ○ ⋂_(n=1)^∞ (−1/n, 1/n) = {0}
  ○ (−1/n, 1/n) is open for all n ∈ N, while {0} is closed.
  ○ So the finiteness assumption in (c) cannot be dropped.

Closure
• Let X be a metric space.
• If E ⊂ X and E′ denotes the set of limit points of E in X,
• then the closure of E is defined to be Ē = E ∪ E′.

Theorem 2.27
• If X is a metric space and E ⊂ X, then:
• Ē is closed.
  ○ Let p ∈ Ē^c.
  ○ Then p is neither a point of E nor a limit point of E.
  ○ So there exists a neighborhood N of p that contains no point of E.
  ○ So N ⊂ Ē^c,
  ○ i.e. every point of Ē^c is an interior point.
  ○ Thus Ē^c is open.
  ○ Therefore Ē is closed.
• E = Ē iff E is closed.
  ○ If E = Ē, then E is closed (by the above).
  ○ If E is closed, E contains its limit points, so E′ ⊂ E and E = Ē.
• Ē ⊂ F for every closed set F ⊂ X s.t. E ⊂ F.
  ○ Suppose F is closed and E ⊂ F.
  ○ F is closed ⇒ F′ ⊂ F.
  ○ E ⊂ F ⇒ E′ ⊂ F′ ⊂ F.
  ○ Thus Ē = E ∪ E′ ⊂ F.
• Intuition: Ē is the smallest closed set in X containing E.

Theorem 2.28
• Statement
  ○ If E ≠ ∅, E ⊂ R, and E is bounded above, then sup E ∈ Ē.
  ○ Hence sup E ∈ E if E is closed.
• Proof
  ○ Let y = sup E.
  ○ If y ∈ E
    § Clearly y ∈ Ē.
  ○ If y ∉ E
    § Let h > 0.
    § Suppose there is no x ∈ E with x ∈ (y − h, y); then y − h is an upper bound for E.
    § But this contradicts the fact that y = sup E.
    § So there must be some x ∈ E with y − h < x < y.
    § Thus, for any neighborhood of y, there exists x ∈ E in the neighborhood.
    § So y is a limit point of E,
    § i.e. y ∈ E′ ⊂ Ē.
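Theorem 2.28 can be illustrated numerically. The sketch below is my own (not part of the lecture): it takes E = {1 − 1/n : n ∈ N}, whose supremum 1 does not belong to E, and checks that every interval (1 − h, 1) contains a point of E, which is exactly how the proof shows sup E is a limit point.

```python
# A numeric illustration (not part of the notes) of Theorem 2.28 for
# E = {1 - 1/n : n in N}: sup E = 1 is not in E, yet every interval
# (1 - h, 1) contains a point of E, so 1 is a limit point of E.
E = [1 - 1 / n for n in range(1, 10_000)]
sup_E = 1.0

assert sup_E not in E                     # the supremum is not attained
for h in [0.5, 0.1, 0.01, 0.001]:
    # the proof's step: some x in E satisfies sup E - h < x < sup E
    assert any(sup_E - h < x < sup_E for x in E)
```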

Math 541 - 2/26

  • Feb 27, 2018
  • Shawn
  • Math 541
Proposition 21: Construction of ⟨A⟩
• Statement
  ○ If A ⊆ G, then ⟨A⟩ = {a_1^(ε_1) a_2^(ε_2) … a_n^(ε_n) | n ∈ Z, n ≥ 0, a_i ∈ A, ε_i ∈ {±1}}.
  ○ (When n = 0, the empty product gives 1.)
• Proof
  ○ Denote the right-hand side by Ā.
  ○ Ā ≠ ∅, since 1 ∈ Ā (take n = 0).
  ○ If a = a_1^(ε_1) a_2^(ε_2) … a_n^(ε_n), b = b_1^(δ_1) b_2^(δ_2) … b_m^(δ_m) ∈ Ā, then
  ○ ab^(−1) = a_1^(ε_1) a_2^(ε_2) … a_n^(ε_n) b_m^(−δ_m) b_(m−1)^(−δ_(m−1)) … b_1^(−δ_1) ∈ Ā.
  ○ Thus Ā ≤ G.
  ○ A ⊆ Ā, and ⟨A⟩ is the smallest subgroup of G containing A, so ⟨A⟩ ⊆ Ā.
  ○ Conversely, Ā ⊆ ⟨A⟩, since every subgroup of G containing A (in particular ⟨A⟩)
  ○ must contain every finite product of elements of A and their inverses.
  ○ Therefore ⟨A⟩ = Ā = {a_1^(ε_1) a_2^(ε_2) … a_n^(ε_n) | n ∈ Z, n ≥ 0, a_i ∈ A, ε_i ∈ {±1}}.
• Example
  ○ If G is a group and g ∈ G, then ⟨{g}⟩ = ⟨g⟩.
• Note
  ○ When G is abelian and A ⊆ G, then
  ○ ⟨A⟩ = {a_1^(n_1) … a_m^(n_m) | n_i ∈ Z, a_i ∈ A, m ∈ Z, m ≥ 0}.

Finitely Generated Group
• Definition
  ○ A group G is finitely generated if
  ○ there is a finite subset A of G s.t. ⟨A⟩ = G.
• Example 1
  ○ Cyclic groups are finitely generated.
• Example 2
  ○ Finite groups are finitely generated.
• Example 3
  ○ If G, H are finitely generated, then G × H is also finitely generated.
  ○ For instance, Z × Z is finitely generated by A = {(1,0), (0,1)}.
  ○ In particular, products of cyclic groups are finitely generated.
  ○ Every finitely generated abelian group is a product of cyclic groups.
  ○ (We won't prove this in Math 541.)
• Example 4
  ○ Every finitely generated subgroup of Q (under addition) is cyclic.
  ○ It follows that Q is not finitely generated, since Q is not cyclic (Q ≇ Z).
  ○ Suppose H ≤ Q and H = ⟨a_1/b_1, a_2/b_2, …, a_n/b_n⟩, where a_i, b_i ∈ Z and b_i ≠ 0.
  ○ Without loss of generality, assume a_i ≠ 0.
  ○ Let S ≔ {x ∈ Z, x > 0 | x/(b_1 b_2 … b_n) ∈ H}.
    § S ≠ ∅, since ±(a_1 b_2 … b_n)/(b_1 b_2 … b_n) = ±a_1/b_1 ∈ H, so |a_1 b_2 … b_n| ∈ S.
    § Applying the Well-Ordering Principle,
    § choose the minimum element e ∈ S.
  ○ Claim: H = ⟨e/(b_1 b_2 … b_n)⟩.
    § Notice that H = {c_1 a_1/b_1 + c_2 a_2/b_2 + … + c_n a_n/b_n | c_i ∈ Z}.
    § So we only need to check a_i/b_i ∈ ⟨e/(b_1 b_2 … b_n)⟩ for all i.
    § Let i be fixed.
    § Set z ≔ b_1 … b_(i−1) a_i b_(i+1) … b_n,
    § so a_i/b_i = z/(b_1 b_2 … b_n).
    § Choose q, r ∈ Z s.t. z = qe + r, 0 ≤ r < e.
    § (z − qe)/(b_1 b_2 … b_n) ∈ H ⇒ r/(b_1 b_2 … b_n) ∈ H.
    § Thus, by the minimality of e, r = 0.
    § This shows e | z.
    § So a_i/b_i = z/(b_1 b_2 … b_n) ∈ ⟨e/(b_1 b_2 … b_n)⟩.
    § Therefore H = ⟨e/(b_1 b_2 … b_n)⟩.
  ○ So H is cyclic.
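The proof of Example 4 is effectively an algorithm: put every generator over the common denominator b_1⋯b_n, and the minimal element e is the gcd of the resulting numerators. The sketch below is my own (the function name and the use of Python's fractions module are my choices, not from the lecture).

```python
# A computational sketch (not from the notes) of Example 4: a finitely
# generated subgroup H = <a_1/b_1, ..., a_n/b_n> of (Q, +) is cyclic,
# generated by e/(b_1...b_n) where e = gcd of the numerators over the
# common denominator. Assumes at least one nonzero generator.
from fractions import Fraction
from functools import reduce
from math import gcd, prod

def cyclic_generator(gens):
    """Positive generator of <gens> <= (Q, +), as a Fraction."""
    B = prod(g.denominator for g in gens)            # common denominator b_1...b_n
    zs = [abs(g.numerator * B // g.denominator) for g in gens]
    e = reduce(gcd, zs)                              # minimal positive numerator e
    return Fraction(e, B)

g = cyclic_generator([Fraction(3, 4), Fraction(5, 6)])
assert g == Fraction(1, 12)          # gcd(18, 20) / 24 = 2/24 = 1/12
# g lies in H as an integer combination: -1 * (3/4) + 1 * (5/6) = 1/12
assert -1 * Fraction(3, 4) + 1 * Fraction(5, 6) == g
# and each generator is an integer multiple of g: 3/4 = 9g, 5/6 = 10g
assert Fraction(3, 4) == 9 * g and Fraction(5, 6) == 10 * g
```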

Math 541 - 2/23

  • Feb 25, 2018
  • Shawn
  • Math 541
Theorem 20
• Let G = ⟨g⟩ be a cyclic group.
• Statement (1)
  ○ Every subgroup of G is cyclic.
  ○ More precisely, if H ≤ G, then either H = {1} or
  ○ H = ⟨g^d⟩, where d is the smallest positive integer s.t. g^d ∈ H.
• Proof (1)
  ○ Assume H ≠ {1}.
  ○ Choose a ≠ 0 s.t. g^a ∈ H; then (g^a)^(−1) = g^(−a) ∈ H.
  ○ Thus, H contains some positive power of g.
  ○ Let S ≔ {b ∈ Z, b > 0 | g^b ∈ H}; then S ≠ ∅.
  ○ By the Well-Ordering Principle, S contains a minimum element d.
  ○ Thus ⟨g^d⟩ ⊆ H; we must show H ⊆ ⟨g^d⟩.
  ○ Let h ∈ H; then h = g^a for some a ∈ Z.
  ○ Choose q, r ∈ Z s.t. a = qd + r, 0 ≤ r < d.
  ○ g^d ∈ H ⇒ g^(−qd) ∈ H ⇒ g^a g^(−qd) ∈ H ⇒ g^r ∈ H.
  ○ If r > 0, then r ∈ S, which is impossible, since r < d.
  ○ Therefore r = 0.
  ○ So g^a = g^(qd) ∈ ⟨g^d⟩ ⇒ H ⊆ ⟨g^d⟩.
  ○ Therefore H = ⟨g^d⟩.
• Statement (2)
  ○ If G is finite of order n, then for every positive integer a dividing n,
  ○ there exists a unique subgroup H ≤ G of order a.
  ○ Moreover, this subgroup is ⟨g^d⟩, where d = n/a.
• Proof (2)
  ○ Let a be a positive divisor of n = |G|.
  ○ Let d ≔ n/a, so n/d = a.
  ○ Existence
    § |⟨g^d⟩| = n/(d, n) = n/d = a by Proposition 19.
    § This proves existence.
  ○ Uniqueness
    § Without loss of generality, assume a > 1.
    § Suppose H ≤ G and |H| = a.
    § We must show H = ⟨g^d⟩.
    § By (1), H = ⟨g^b⟩, where b is the smallest positive integer s.t. g^b ∈ H.
    § We have n/d = a = |H| = |⟨g^b⟩| = n/(n, b) by Proposition 19.
    § Thus d = (n, b), i.e. d | b.
    § Thus g^b ∈ ⟨g^d⟩ ⇒ ⟨g^b⟩ ≤ ⟨g^d⟩.
    § Since |⟨g^b⟩| = |⟨g^d⟩|, ⟨g^b⟩ = ⟨g^d⟩,
    § i.e. H = ⟨g^d⟩.

Subgroups Generated by Subsets of a Group (Section 2.4)
• Lemma: If {H_i}_(i∈I) is a family of subgroups of G, then ⋂_(i∈I) H_i ≤ G.
  ○ Let H ≔ ⋂_(i∈I) H_i.
  ○ H ≠ ∅, because 1 ∈ H_i for all i ∈ I.
  ○ Let h_1, h_2 ∈ H; then h_1, h_2 ∈ H_i for all i ∈ I
  ○ ⇒ h_1 h_2^(−1) ∈ H_i for all i ∈ I
  ○ ⇒ h_1 h_2^(−1) ∈ H.
• Definition
  ○ Let G be a group and A ⊆ G.
  ○ The subgroup generated by A is
  ○ the intersection of all subgroups of G containing A:
  ○ ⟨A⟩ ≔ ⋂_(H≤G, A⊆H) H.
• Example
  ○ If A = ∅, then ⟨A⟩ = {1}.
  ○ If A = {1}, then ⟨A⟩ = {1}.
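Theorem 20(2) can be checked computationally in a concrete cyclic group. The sketch below is my own (not part of the lecture); it uses the additive group Z_n, where ⟨g^d⟩ becomes the set of multiples of d mod n, and verifies that for each divisor a of n there is exactly one subgroup of order a, namely ⟨n/a⟩.

```python
# A computational check (not from the notes) of Theorem 20(2) in the
# additive cyclic group Z_n: for each divisor a of n, the unique
# subgroup of order a is <d> with d = n/a.
def subgroup(n, g):
    """Cyclic subgroup of Z_n (under addition mod n) generated by g."""
    return frozenset((k * g) % n for k in range(n))

n = 12
all_subgroups = {subgroup(n, g) for g in range(n)}   # every subgroup is cyclic, by (1)
for a in [1, 2, 3, 4, 6, 12]:                        # the divisors of 12
    of_order_a = [H for H in all_subgroups if len(H) == a]
    assert of_order_a == [subgroup(n, n // a)]       # exactly one, namely <n/a>
```

Enumerating `subgroup(n, g)` over all g relies on Statement (1): every subgroup of a cyclic group is itself cyclic, so nothing is missed.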
Copyright © 2018.