I’m moving to
because Blogger is better at handling LaTeX than WordPress, and most of the reason I stopped posting here had to do with the fact that using LaTeX here is incredibly annoying.
First, some definitions to add context to the post. Throughout, we will assume that is a set of regular cardinals with no maximum such that .
Definition: For , we say that is a generator for if is the ideal generated from by adding in . In this case, we write . If is a generator for , we write for to indicate this.
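For reference, in the usual pcf notation (my reconstruction, since the formulas here follow the standard conventions), the definition reads:

```latex
% A is a progressive set of regular cardinals; J_{<\lambda}[A] are the usual pcf ideals.
B_\lambda \subseteq A \text{ is a generator for } \lambda \in \operatorname{pcf}(A)
\quad\text{iff}\quad
J_{<\lambda^{+}}[A] \;=\; J_{<\lambda}[A] + B_\lambda .
```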
The key component of Shelah’s celebrated ZFC bound on is not only that generators exist for every , but that we can manufacture transitive generators for large enough portions of .
Definition: Suppose that , then a sequence of generators is transitive if for every .
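In symbols, using the standard notation for generating sequences, transitivity is the requirement:

```latex
\langle B_\lambda : \lambda \in \operatorname{pcf}(A) \rangle \text{ is transitive}
\quad\text{iff}\quad
\mu \in B_\lambda \implies B_\mu \subseteq B_\lambda
\quad\text{for all } \lambda, \mu \in \operatorname{pcf}(A).
```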
The construction of transitive generators from the Abraham-Magidor chapter of the Handbook uses something called elevated sequences. While this ends up being quite slick, the construction from Sh430 is more transparent. The basic idea is to shove everything into small elementary submodels, and then try to make your generators transitive. Because this procedure can be carried out in small elementary submodels, we know that this has to stabilize eventually. The only problem now is checking that what we end up with is still a sequence of generators. This is done by defining functions which behave like universal cofinal sequences, and using these functions to then show that the objects we end up with are generators.
Now, the Abraham-Magidor approach turns the above approach on its head. The elevated arrays play the part of functions which look like universal cofinal sequences, and these are used to define transitive generators. This ends up looking nicer, if a bit mystifying. Both proofs share many of the same characteristics though, including the key use of characteristic functions of -approximating sequences of elementary submodels.
As I’m interested in where the Abraham-Magidor approach diverges from Shelah’s approach, I’m going to pick up where this happens.
First let where each $\vec f^\lambda$ is a universal cofinal sequence which is minimally club obedient at cofinality for each regular such that . Now, let be regular with , and fix a -approximating sequence of elementary submodels of some with . That is:
1) Each ;
2) for each , we have that ;
3) for every ;
4) is an initial segment of .
Set . For each and , we define
We can then find a club such that, for each in and , the following hold:
We’re also going to ask that , so . One thing to note here is that I’m probably being overly careful, in the sense that we probably don’t need that . I just know for sure that the above works if we give ourselves a little more room, and peeling off a few cardinals at the beginning of won’t affect anything. The above list of results can be derived from the results in Section 5 of the Abraham-Magidor paper. Now we’ve reached the point where the two approaches to transitive generators differ. As I said before, we’re just going to make these generators transitive by brute force, and then prove that the brute force approach didn’t break anything. So we fix in and , and define the following sets by induction on :
The notation might look horrible, but really we’re just starting with the set and attempting to make it transitive at successor stages. At limit stages, we simply take the union. The nice part is that this entire procedure (for fixed ) can be carried out inside which has cardinality . As the above sequence of sets is increasing and continuous in , and is a sequence of length , it follows that this sequence has to stabilize at some point. Let’s call this stabilization point as this depends on all three parameters.
Now, note that , and so there must be a single that works for each . So now we have the following properties of our sequence :
The first gives us transitivity, and the second gives us that the ideal contains . All we have to do now is show that , as that will tell us that won’t add the wrong sets. Along these lines, for each , and for each define an increasing sequence of functions with domain by induction on as follows:
Case 2: Suppose that the above case fails, but such that where is minimal. Then we set .
By construction, we see that this sequence is increasing and that the domains of the functions are as desired. Note that we suppressed any reference to , but this entire construction can be carried out inside . Finally, for every and every , we have that . This follows readily from our construction.
At this point, we’re ready to finish the proof. The idea is that the functions we built look enough like universal cofinal sequences, that we can use them to show we have generators. To this end, we suppose that . This means that we can find some which serves as a -upper bound for , since is directed. The fact that we can find such a in follows from elementarity along with the fact that the sequence ends up in . Here’s the kicker, though: since and , it follows that which is a contradiction. So we have transitive generators.
In Section 6 of Sh410, Shelah defines several related cardinal invariants, one of which makes an appearance in a proof of the revised GCH (the one in the Abraham-Magidor chapter of the handbook). I want to use this space to clear up some of the definitions.
Def: Let be an ideal on some set . Then is -based if, whenever is such that , we also have .
Our working assumptions for this post are the following:
Def: Say a cardinal is representable if there is a collection of finite subsets of such that, for any , we have .
Def: Say a cardinal is weakly representable if there is a collection of finite subsets of such that, for any , we have .
It’s clear here that simply because the supremum is being taken over more cardinals. It turns out that these cardinals are actually equal to each other, but now that I look at the proof, I can’t really make heads or tails of it. For now, I’ll go ahead and assume that the proof is correct and see if I can piece together what’s going on later.
Anyway, we have another cardinal invariant which appears, the definition of which I want to take some time to consider. Basically, the definition is probably incorrect, given the proof of Theorem 6.1, so I want to repair it in a way that makes sense.
Def: (incorrect version)
This definition looks pretty horrible, but let’s take a look at what’s going on. First, let’s replace with its counterpart to make things easier. Now, recall here that
Then, note that if then certainly and so we have that the sequence of ideals is decreasing. Now, let’s suppose that is representable by , and fix a representation of . Then for any which is positive with respect to , we have that . Now, since is larger (hence has fewer positive sets), it follows that must be representable by . In other words:
Hence, that sup is achieved by . So, as written, that definition seems somewhat strange. On the other hand, what would make sense is asking for a min instead of a sup, since we get a decreasing sequence of ordinals. Finally, my advisor, Todd Eisworth, pointed out that the proof of Theorem 6.1 doesn’t actually go through as written and requires that the first min actually be a sup. Now, looking at the proof, it remains intact if we instead take the definition of to be:
Def: (correct version) .
This definition actually makes much more sense, and it makes the proof of Theorem 6.1 go through. So, my suspicion is that the above is the correct definition of .
I wanted to use this post to briefly talk about the square bracket partition relation, as a lot of the work I’ve been doing recently has centered around a very particular instance of this.
Definition: Suppose , , and are cardinals, and that is an ordinal. The symbol
means that, for any coloring of the cardinality subsets of in colors, there is some such that . We say that is weakly homogeneous in this case.
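In the standard Erdős–Hajnal–Rado notation, the square bracket symbol and its meaning are (my rendering of the usual definition, with the exponent written as a generic $\mu$):

```latex
\lambda \longrightarrow [\kappa]^{\mu}_{\theta}
\quad\text{iff}\quad
\forall\, c : [\lambda]^{\mu} \to \theta \;\;
\exists\, H \subseteq \lambda,\ \operatorname{otp}(H) = \kappa,
\ \text{ such that } c''[H]^{\mu} \neq \theta .
```

That is, some set of the prescribed size omits at least one color, rather than being monochromatic as in the round bracket relation.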
Perhaps the best way to get a handle on the square bracket partition relation is to look at colorings of pairs. In particular, one question that appears in the literature is, given a cardinal , when does the relation hold? In order for this to fail, there has to be a coloring such that, for any with , we have that . That is, there is a coloring of the pairs of which is so pathological that restricting it to any subset of size gives us a coloring which hits every color.
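Writing the failure described above out formally (in the standard notation):

```latex
\lambda \not\longrightarrow [\lambda]^{2}_{\lambda}
\quad\text{iff}\quad
\exists\, c : [\lambda]^{2} \to \lambda \ \text{ such that }
\forall\, A \in [\lambda]^{\lambda} :\ c''[A]^{2} = \lambda .
```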
We can see the failure of this partition relation as a gross failure of Ramsey’s theorem for . So with that in mind, I’m going to focus on colorings of pairs in this post, as there is a very deep theory that comes from asking to construct pathological colorings of pairs.
For example, Todorčević was able to show that, for regular uncountable , we have using his walks on ordinals technique. Shelah was then able to expand upon this further to show that if has a non-reflecting stationary subset, then . In more recent work, both Shelah and Eisworth have been able to show that this, and even stronger negative partition relations, hold for many where is singular by combining the machinery of scales and club guessing with walks on ordinals.
I’ll probably return to that stuff in a later post. In particular, I want to give an overview of what’s known in the area, and motivate some of the open questions. I’ve been spending most of my time looking at colorings of all finite subsets of , but in that case one can actually work with elementary submodels instead. Lately, however, I’ve been interested in colorings of pairs, so posting about this will be a nice way of keeping myself on track. First though, I want to talk about a case when the square bracket partition relation does hold.
Theorem (Prikry): If is real-valued measurable, then .
Proof: Much like how we exploit the -complete, normal measure on a measurable to show that is Ramsey, we will use the ideal of null sets to find a weakly homogeneous set for any coloring.
So let be a -additive, atomless measure on with measure algebra all of . Let be the ideal of -null sets, and let denote the dual filter. It’s relatively easy to see that must be -complete and -saturated.
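Spelled out, the hypotheses on the measure are the standard ones for real-valued measurability of $\kappa$ (stated here for reference; saturation comes from the fact that a measure algebra is ccc):

```latex
\mu : \mathcal P(\kappa) \to [0,1], \qquad
\mu(\kappa) = 1, \qquad
\mu(\{\alpha\}) = 0 \ \text{ for all } \alpha < \kappa,
```
```latex
\mu \ \text{is } \kappa\text{-additive and atomless}; \qquad
I = \{ X \subseteq \kappa : \mu(X) = 0 \}
\ \text{is } \kappa\text{-complete and } \aleph_1\text{-saturated}.
```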
Now, fix a coloring . For each and , let
Note that for each , we have that . So by completeness and saturation of , we have that there is some such that . Now we recursively build an increasing sequence with the property that . This is easy by the -completeness of .
Now, note that since real-valued measurability of gives us that , we can find a set of of size such that for each such . So reindex and let be the set of corresponding . Then we have that gives us that and so . But then we’re done, as .
It should be noted that the failure of CH was necessary here by Todorčević’s theorem.
One of the projects on my plate for next semester is to understand Shelah’s original proofs of his Revised GCH from Sh460. Despite the fact that the proof from Abraham and Magidor’s chapter in the Handbook is comparatively easy to work through, it looks like there’s some good information smuggled into Shelah’s original proofs which makes them worth looking at. I should point out that yes, there are in fact two proofs of the Revised GCH in Sh460. The first uses generic ultrapowers, and I’m a bit wary of it, as it uses Chapter 5 of Cardinal Arithmetic as a black box. The second proof, however, is more pcf-theoretic, and it seems a bit less challenging since the aforementioned Handbook chapter is such a wonderful resource on the basics of pcf theory.
Before I get to any of these proofs, I plan on actually working through the Abraham-Magidor version of the proof. I haven’t done any pcf theory for a few months, and I want to go back and get reacquainted with the machinery. Before even doing that though, I want to take some time to motivate why The Revised GCH is an appropriate name for the theorem. Hopefully this will have the benefit of getting other people interested in the result, because it is genuinely surprising and pretty. This part is intended for a mathematical audience acquainted with the basics of set theory. As a result, it will be simultaneously far too curt, contain far too much information, sweep too many details under the rug, and provide too many specifics.
In Sh460, Shelah starts off by looking at Hilbert’s first problem: the continuum hypothesis. The question itself is rather simple: Is it the case that ? It turned out that this question was quite difficult to answer. In fact, this question itself spurred the development of quite a bit of set theory, but we won’t be focusing on that. What is worth noting is that we can generalize this question to the following:
“Is it the case that, for any cardinal , the operation is precisely the cardinal successor operator on ?”
A positive answer to the above question is called the Generalized Continuum Hypothesis (or GCH). Gödel was able to show that GCH is consistent with ZFC (the “usual” axioms of set theory that most mathematicians take). That is, we can find an example of a “universe” in which ZFC holds and GCH is true.
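For the record, the statements in question are:

```latex
\text{CH}: \quad 2^{\aleph_0} = \aleph_1
\qquad\qquad
\text{GCH}: \quad 2^{\kappa} = \kappa^{+} \ \text{ for every infinite cardinal } \kappa .
```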
Now, in order to answer this question further, we would need to know more about the map . However, for some time the only thing we knew was the fact that it must obey two rules:
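The two rules (I’m confident these are the intended ones: monotonicity, plus König’s theorem) are:

```latex
1)\ \ \kappa \le \lambda \implies 2^{\kappa} \le 2^{\lambda};
\qquad\qquad
2)\ \ \operatorname{cf}(2^{\kappa}) > \kappa .
```

For example, rule 2 with $\kappa = \aleph_0$ rules out $2^{\aleph_0} = \aleph_\omega$, but says nothing against $2^{\aleph_0}$ being, say, $\aleph_{17}$ or $\aleph_{\omega+1}$.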
Due to the work of Easton, it turned out that these were the only rules for regular cardinals . That is, given any “function” from regular cardinals to cardinals obeying the two rules above, there is a universe in which the continuum function is precisely described by . This does provide a resolution to our question, but it seems very unsatisfying. Essentially, the continuum function on regular cardinals is arbitrary modulo two very minor restrictions. One thing to do here is to take this resolution as evidence that we’ve asked the wrong question, and instead look at how we can massage Hilbert’s problem into something more reasonable.
Here, we have two approaches. The first is to note that all of these issues are arising from the fact that we’re considering regular cardinals. So, we can ask ourselves about singular cardinals and see where we end up. This leads us to the singular cardinal hypothesis, and there has been a lot of fruitful investigation done in this vein by way of something called pcf theory. The other method is to see what we can say about regular cardinals, which is what we’re concerned about here.
One way of looking at GCH is that it says, roughly speaking “cardinal exponentiation is not too unruly”. So while we can’t say much about the continuum function, it may be worthwhile to look at the values of for regular. Perhaps we can ask that exponentiation behaves like sum and product for infinite cardinals, which brings us to the following first revision:
For regular cardinals , we have
Still though, this is not quite what we want. Part of the issue is that these values are too tied up with each other, so failures for small values of will imply failures all the way up. This is where Shelah introduces a revised version of cardinal exponentiation that allows for a finer slicing. First, some notation:
For cardinals , we set . We will also have occasion to use .
One thing to note is that we have , so looking at this collection isn’t completely unreasonable. For regular , we then define “ to the revised power of ” to be:
Now, this looks like a lot, but it’s not too bad. Essentially, we look at certain sorts of covering families , and ask what the minimum cardinality of such a covering family must be. Here though, our version of covering is that any is covered by a union of fewer than -many elements of . The obvious question is “what does this have to do with ?”
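Written out, the revised power (in the notation of the Abraham-Magidor chapter, as I recall it, with $\theta \le \lambda$ regular) is:

```latex
\lambda^{[\theta]} \;=\; \min \Bigl\{\, |\mathcal P| \;:\;
\mathcal P \subseteq [\lambda]^{\le \theta} \ \text{ and every } a \in [\lambda]^{\theta}
\text{ is covered by a union of fewer than } \theta \text{ members of } \mathcal P \,\Bigr\}.
```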
Claim: For every we have that if and only if and for every regular ,
First suppose that . Then certainly we have that , as . Further, for any regular , we know that
So we simply note that witnesses that . As the other inequality holds trivially, we’re done with this direction.
For the other direction, we proceed by induction on . So assume , and let be a family witnessing that . Let , and let be such that . Then is countable, and so we see that is isomorphic to a subset of insofar as it sits inside . Thus, we can associate to a unique and some , which yields an embedding of into . By assumption, this set has size .
Now let be regular such that our conclusion holds for each . That is, for each such , if for every regular , then . By assumption, we therefore know that . Thus, we also have that
As before, we begin by enumerating a family witnessing the fact that . We then fix , and let be such that . As before, since is of size , we see that is isomorphic to a subset of insofar as it sits inside that union. So we can associate to a unique and some , which yields an embedding of into . By assumption, this set has size , and so we’re done.
So the above claim shows that looking at these revised powers is a completely reasonable thing to do. The other nice thing is that Shelah and Gitik have shown that the values of and are independent of each other for . This brings us to the Revised GCH:
The Revised GCH Theorem (Shelah): Fix any uncountable strong limit cardinal . For every there is some such that if is regular with , then .
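In symbols, with $\mu$ the fixed uncountable strong limit cardinal, the statement I have in mind is:

```latex
\text{For every } \lambda \ge \mu \ \text{ there is } \kappa < \mu \ \text{ such that }
\lambda^{[\theta]} = \lambda \ \text{ for every regular } \theta \text{ with } \kappa \le \theta < \mu .
```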
Put more simply: for most pairs , we have that . Given our discussion above, this is indeed a theorem deserving of the name “Revised GCH”.
The first topic I’ll be covering in the Algebra and Set Theory seminar is Whitehead’s Conjecture, and I want to take some time to sketch out how those lectures are going to go.
In order to even talk about Whitehead’s conjecture, I’m going to need to talk about Ext. So given any abelian group , we say that a free resolution of is a short exact sequence of the form
where and are both free groups. Now, every abelian group has a free resolution as we can take to be the free abelian group generated by using as an alphabet, and to be the kernel of the surjection induced by . Hitting this complex with the functor yields the following cochain complex:
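Writing the resolution as $0 \to K \xrightarrow{\iota} F \to A \to 0$ and using the left-exactness of the contravariant functor $\operatorname{Hom}(-, \mathbb Z)$, the complex in question is:

```latex
0 \longrightarrow \operatorname{Hom}(A,\mathbb Z)
  \longrightarrow \operatorname{Hom}(F,\mathbb Z)
  \xrightarrow{\;\iota^{*}\;} \operatorname{Hom}(K,\mathbb Z)
  \longrightarrow 0,
\qquad
\operatorname{Ext}^{1}(A,\mathbb Z) \;\cong\;
  \operatorname{Hom}(K,\mathbb Z) \,/\, \operatorname{im}(\iota^{*}).
```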
Now that we have a cochain complex, we can ask about the cohomology groups of this complex, which are denoted by . So what on earth does this tell us about my original group ? Let’s borrow some (misleading) intuition from algebraic topology. One nice fact is that if my fundamental group is trivial, and all cohomology groups are trivial, then my space (provided it’s a CW complex) is contractible. So maybe if is trivial I can say something about my group.
It turns out that we can! This cohomology group being trivial is equivalent to the statement that the only group extension of by is just the direct sum . This is coming from the equivalence of the “derived functor” presentation of Ext and the “classical” version of Ext in terms of extension classes (hence ext). That’s pretty cool, but what about that topology bit? Obviously I can’t say that is contractible, but can I say that is “nice” in some other way? This leads us to Whitehead’s conjecture.
Whitehead’s Conjecture: For any abelian group , if , then is free.
The converse is actually a really easy theorem since Ext is invariant with respect to which free resolutions we take, and we can always resolve a free group in the dumbest way possible:
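Explicitly, for a free group $F$ we can take the resolution with trivial kernel:

```latex
0 \longrightarrow 0 \longrightarrow F \xrightarrow{\ \operatorname{id}\ } F \longrightarrow 0,
\qquad\text{so}\qquad
\operatorname{Ext}^{1}(F,\mathbb Z) \cong \operatorname{Hom}(0,\mathbb Z) = 0 .
```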
Of course, the first cohomology group of the resulting cochain complex is trivial. What’s interesting about Whitehead’s conjecture is that it’s independent of ZFC, so the resolution comes in two parts. Here’s my plan for how we’re going to tackle that:
One thing that I find neat about this proof is that we get to touch on quite a bit of set theory as we go along, and it’s rather instructive as to how one can employ some powerful machinery to prove things about abelian groups. The main reference I’ll be using for this part is Paul Eklof’s paper “Whitehead’s Problem is Undecidable”.
So one of my goals for this semester (or year) is to try and figure out what’s going on in Section 2 of Sh460. Of course, the section starts off by referencing Claim 6.7A of Sh430 and improving it (without mentioning what’s actually going on in that claim). Looking back at Claim 6.7A of Sh430, it turns out that this references some of the tools used in the proof of Claim 6.7, which gives us the existence of closed and transitive generators. Now it turns out that one of the things that we worked through in the summer school at UC Irvine (which I like to call pcf-fest 2016) is this very thing.
The proof that James gave was a bit different, but I think that Claim 6.7A is really just making more explicit the relationship between transitive generators, universal sequences, and -IA elementary substructures. So what I’d like to do first is go back and work through the existence of transitive generators, and see how much of this stuff I can tease out along the way. Hopefully that’ll also put me in a good mindset to work through the Sh460 stuff. I figured that a good place to start is with the usual construction of generators and how they relate to universal sequences.
Throughout this, I’m going to let be a collection of regular cardinals, and put restrictions on it as necessary.
Definition: Let be a set of regular cardinals, and define
Here is just domination modulo . I will frequently bounce between and .
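The definition, in standard notation:

```latex
\operatorname{pcf}(A) \;=\;
\bigl\{\, \operatorname{cf}\bigl(\textstyle\prod A / D\bigr)
\;:\; D \ \text{an ultrafilter on } A \,\bigr\}.
```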
Definition: Let be a regular cardinal, then
Note that this is an ideal on .
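Again in the standard notation, for $\lambda$ regular the ideal is:

```latex
J_{<\lambda}[A] \;=\;
\bigl\{\, B \subseteq A \;:\;
\operatorname{cf}\bigl(\textstyle\prod A / D\bigr) < \lambda
\ \text{ for every ultrafilter } D \text{ on } A \text{ with } B \in D \,\bigr\}.
```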
Definition: We say that is a generator of if , and is generated from by .
In particular, we see that . Also note that if , then obviously for . So in the case that , asking for a generator is fantastically uninteresting. Now, let’s say that is progressive whenever .
Definition: Let , then is a universal sequence for if:
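As I remember them, the conditions are (standard notation, so treat this as my reconstruction):

```latex
\vec f = \langle f_\alpha : \alpha < \lambda \rangle \subseteq \prod A
\ \text{is universal for } \lambda \in \operatorname{pcf}(A) \ \text{iff}
```
```latex
1)\ \ \vec f \ \text{is } <_{J_{<\lambda}[A]}\text{-increasing};
\qquad
2)\ \ \vec f \ \text{is cofinal in } \prod A / D \ \text{for every ultrafilter } D
\ \text{on } A \ \text{with } \operatorname{cf}(\prod A / D) = \lambda .
```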
Note that if is an ultrafilter with , then . Otherwise, there is some with and . But then, since , which would mean that . This gives us another characterization of as the collection of subsets of which forces whenever they get assigned measure one by .
Theorem (Shelah): If is progressive, then for every , there is a universal sequence for with an exact upper bound .
Why are universal sequences useful? Well, if is a universal sequence for with exact upper bound , then the set is actually a generator for . Now, these generators are only unique modulo , and so we have some room to massage them. In the next post, I want to examine the possibility of doing just that.