Tue May 30 09:18:13 PDT 2006 T0304

Make started Tue May 30 10:42:22 PDT 2006
Running on lopez.cse.ucsc.edu

Tue May 30 11:10:23 PDT 2006 Kevin Karplus
No good hits with BLAST (best E-value 0.658).
No pdb sequences in the multiple alignments.
Only weak hits with w0.5 models, and not much consistency among 2-track models.
It looks like this one will be a new-fold target.

Make started Tue May 30 12:45:22 PDT 2006
Running on farm09.cse.ucsc.edu

Tue May 30 12:45:45 PDT 2006 Kevin Karplus
lopez crashed (or something) at 11:12 this morning. I sent the job to the farm cluster; though the machines there are slow, they aren't subject to the power problems in PSB today. farm09 is particularly slow, since parasol currently thinks it has 5 processors, not just 2.

Tue May 30 19:20:40 PDT 2006 Kevin Karplus
No good hits. The best E-values are 1rybA (11.16) and 1gulA (13.24). This will require ab-initio techniques.

Fri Jun 16 09:19:06 PDT 2006 Kevin Karplus
None of the alignments have more than small fragments, and try1-opt2 does not have a good match to the 2ry prediction.

Our models score best (compared with the server models) with unconstrained.costfcn. The next best (possibly really better) are
    Pmodeller6_TS4
    Pmodeller6_TS3
    ROBETTA_TS3
    ROBETTA_TS1
    Pcons6_TS3-scwrl
    Pcons6_TS3
    Pmodeller6_TS3-scwrl
    ROBETTA_TS3-scwrl
    ROBETTA_TS1-scwrl
    Pmodeller6_TS4-scwrl
    GeneSilicoMetaServer_TS3
    SP3_TS2-scwrl
    SP4_TS2-scwrl
    SP3_TS2
    SP4_TS2

The GeneSilicoMetaServer model is all-helical trash, but the four server models
    ReadConformPDB Pmodeller6_TS4.pdb
    ReadConformPDB ROBETTA_TS3.pdb
    ReadConformPDB Pcons6_TS4.pdb
    ReadConformPDB SP3_TS2.pdb
are all interesting potential models (and quite different from each other). All have some disagreement with the secondary structure predictions, though none as bad as our try1.

The top three models
    Pmodeller6_TS4.pdb
    ROBETTA_TS3.pdb
    Pcons6_TS4.pdb
all agree on the long helix, though they do rather different things with the strands.
Sat Jun 17 13:30:52 PDT 2006 George Shackelford
So I think I'll try to do refinements of each of these four as backups, and one try using them as a group for our own effort. That should do for Monday's soft deadline. I'm going to use the standard polishing approach for the first 4 tries. Then we can use the polished versions to generate a new try. I'll use T0302 as a model.
try2 running on peep

Sat Jun 17 17:22:54 PDT 2006 George Shackelford
After a couple of false starts, I finally think I've got the polishing routine down. I am now working on
    ReadConformPDB ROBETTA_TS3.pdb
    ReadConformPDB Pcons6_TS4.pdb
as try3 and try4.
try3 and try4 running on orcas
    ReadConformPDB SP3_TS2.pdb
as try5.
try5 running on peep (now that try2 has just finished)

Sun Jun 18 01:14:13 PDT 2006 George Shackelford
The tries have finished and been scored. SP3_TS2 (try5) comes out in front despite bad breaks and sidechains that are not as good as ROBETTA_TS3 (try3). Those two were close, and Pmodeller6_TS4 (try2) and Pcons6_TS4 (try4) were not too far behind. It was interesting to see the fourth choice showing up as first after polishing. I wonder if other servers might have moved up to first place as well if they were polished?

Now I'm going to take all the tries, including our first one, put the constraints back in, and see what undertaker can come up with. Not quite a polishing, but I am going to exclude the original hits for now and put read-pdb.under in as our source. I'll put the TryAllAligns back in as well. Let's jiggle it around and see what we can come up with.
try6 running on peep

Sun Jun 18 12:40:43 PDT 2006 George Shackelford
Try6 is a slightly more refined version of try5. I needed to pull the sheet configurations of the most attractive tries, add them to the constraints, and redo using selected templates and their alignments. We need something different to add to the other four. I am still concerned about the fit to ehl2. True, we're dealing with a new fold, but I'd still like to match the ehl2 better.
Looking at the different models, I like the looks of ROBETTA_TS3 (try3) best for its sheets; however, one part of the sheet is predicted to be a helix. I am going to assume that it is a helix and treat the ending part as parallel. That matches well with the t06.str2 predictions of a helix to parallel edge. The sheets match well with the best of the rr predictions as well. I'm building a set of constraints for that and doing yet another run. I'll leave it wide open; whatever comes out will likely be our submission for the soft deadline.

# From try3 sheets we use:
SheetConstraint (T0304)G31 (T0304)Q36 (T0304)A113 (T0304)D108 hbond (T0304)A32 2.0
# SheetConstraint (T0304)Q36 (T0304)E37 (T0304)L41 (T0304)R40 hbond (T0304)E37 0.0
# SheetConstraint (T0304)H42 (T0304)D46 (T0304)L53 (T0304)G49 hbond (T0304)Y43 0.0
SheetConstraint (T0304)V90 (T0304)A94 (T0304)A101 (T0304)L97 hbond (T0304)L92 2.0
SheetConstraint (T0304)L97 (T0304)T103 (T0304)L112 (T0304)S106 hbond (T0304)T98 2.0
# and add this one:
# 33-36 | 50-53 hbond 33
SheetConstraint R33 Q36 I50 L53 hbond R33 2.0

try7 running on peep

Sun Jun 18 20:28:27 PDT 2006 George Shackelford
Try7 is a weak effort to fulfill the constraints; we really want that sheet to form. I need to see what it takes to restrict us to something close enough. So far, using everything we have is creating a broken solution. This is not good. I can't seem to think of anything at this point. If I can find a template that matches what I want in the sheet, I could at least focus on that. Otherwise I am stymied by the diversity of alignments.

Mon Jun 19 17:33:34 PDT 2006 Kevin Karplus
The soft deadline is tomorrow at noon, so I wanted to do a submission tonight, but George has not left me any clear indication of which models to submit.
Mon Jun 19 17:51:15 PDT 2006 Kevin Karplus
I looked at score-all.unconstrained.pretty and score-all.try1.pretty, and both agreed that the order should be try6, try5, try3, try4, try2, try1. But these do not all have the long helix, which I rather like. I made a new costfcn "strong-helix.costfcn" which increases the weight for the long helix substantially, and uses only the dssp-ehl2 constraints (not the rr constraints or sheet constraints):
    HelixConstraint D56 T78 5 # 0.6551
The ordering with this is try6, try5, try3, try4, try2, try7, try8-opt1, try1. Try8 really does get the helix, but not much else. try4 also gets the helix, but (like try3) turns the short helix into a strand. try5 and try6 are almost identical models.

George still hasn't fixed his .cshrc file so that decoys/grep-best-rosetta gets made properly. I remade it, and try-opt2 is the one that rosetta most likes for repacking.

For the soft submission I think that I'll do
    try3-opt2
    try6-opt2
    try4-opt2
    try2-opt2
    try1-opt2
Now I need to figure out the history of each model:
    try3 <= ROBETTA_TS3
    try6 <= try5 <= SP3_TS2
    try4 <= Pcons6_TS4
    try2 <= Pmodeller6_TS4
    try1 <= alignments
I don't believe that any of our submitted models are right, and we are still trying to come up with a model that has a topology that we can use. We may have to use ProteinShop to construct the desired topology manually, from whatever model is currently closest.

Mon Jun 19 18:23:17 PDT 2006 Kevin Karplus
I submitted the 5 models above, but we really need to keep working on this one.

Mon Jun 19 21:45:57 PDT 2006 George Shackelford
NOTE: I put the required change in .cshrc to get decoys/grep-best-rosetta to run, only to find that it caused my KDE desktop to malfunction. I had to remove the change if I wanted to work at school. The selection by Kevin was all I could have come up with. I am still struggling to get undertaker to work as I would like.
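As a bookkeeping aside, the model history above is a simple parent chain, and a few lines of Python can walk it. The parent map below is transcribed from this entry; the helper itself is purely illustrative, not one of our scripts.

```python
# Illustrative sketch: resolve each model's ancestry back to its root.
# The PARENT map is transcribed from the notebook entry above; the
# lineage() helper is hypothetical, not part of the lab's tooling.
PARENT = {
    "try3": "ROBETTA_TS3",
    "try6": "try5",
    "try5": "SP3_TS2",
    "try4": "Pcons6_TS4",
    "try2": "Pmodeller6_TS4",
    "try1": "alignments",
}

def lineage(model):
    """Follow parent links until we reach a model with no recorded parent."""
    chain = [model]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

print(lineage("try6"))  # ['try6', 'try5', 'SP3_TS2']
```

This kind of map is exactly what the "history of each model" list encodes; keeping it machine-readable would make the later "which lineage does this try belong to" questions trivial.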
I'll take out the inclusion of all alignments but leave the "ReadFragmentAlignment NOFILTER SCWRL all-align.a2m" line in. I don't quite understand how this affects things. I'm commenting out the rr.0.1.constraints and uncommenting the "include T0304.dssp-ehl2.constraints". I don't know how I managed to get that commented out in the first place. I'd like to have some alignments from ROBETTA_TS3, but it is not clear how to get them. I accidentally replaced try8.costfcn with try9.costfcn. I am using "extra.log" instead of "tryNN.log" when I am running only one try; this recycles the log, which is ordinarily unnecessary. I'll revert to the "tryNN.log" approach when running more than one try.
try9 running on peep

Mon Jun 19 22:31:29 PDT 2006 George Shackelford
From what I've seen of "grep pool try9.log", I am assuming that the effort is not going well. There are references to 1zcjA, 2gaiA, and 1z9lA, which I think I've seen before when grepping for "best". I think I'll just start a try10 without the "ReadFragmentAlignment" statement. It seems to really cut down the options, but I don't think I have a choice. So try10 is the same as try9 but without the "ReadFragmentAlignment" statement.
try10 running on peep

Tue Jun 20 03:23:31 PDT 2006 George Shackelford
try10 is again a mess. I have to review what I am doing. To summarize, my favorites are:
    try3 <= ROBETTA_TS3
    try6 <= try5 <= SP3_TS2
    try4 <= Pcons6_TS4
    try2 <= Pmodeller6_TS4
    try1 <= alignments
I can't add the 'fix' to .cshrc; it causes my KDE desktop to fail. I have tried to get the sheet constraints to form a sheet with two helices. These have failed every time, whether with all alignments available or restricted to try3, which contains the closest sheet with one helix.

Mon Jul 3 11:02:45 PDT 2006 George Shackelford
Using the experience from other targets, I am revisiting what I've done here. It is time for a fresh approach.
I am running the match program to look for similar sequences based on ehl2 and burial:

# Target: T0304
# length: 122
# length range: 113 to 134
# alphabets used: ehl2 burial
# id    score    per residue
# 5S 10N 10N
1ew0A    386.42     3.16737
1kpf     382.139    3.13229
1cc3A    377.429    3.09368
2azaA    373.443    3.06101
1cuoA    370.245    3.03479
1jzgA    369.393    3.02781
1nwpA    365.779    2.99819
1eteA    363.049    2.97581
1regX    361.894    2.96635
1qtoA    361.739    2.96507
1ijxA    361.638    2.96424
8rucI    361.495    2.96307
3rubS    361.096    2.9598
1eo6A    359.3      2.94508
1gnuA    359.234    2.94454
1d4xG    357.929    2.93385
1poc     357.636    2.93144
1h9dA    356.274    2.92028
1qd9A    355.865    2.91692
1qu9A    355.37     2.91287

So this list has 1kpf, which is missing from an earlier list made with a slightly different scheme of mapping the log probabilities. I wonder what the real difference is? We do the usual routine of two tries with ten templates each. We'll see how they do against the server models...

Wed Jul 5 00:41:51 PDT 2006 George Shackelford
Try12 and try13 did OK. I went and did try14 and try16 (a restart of try15) and got some decent results. I need to work on the three-strand sheet that is part of the end and see what I can get. More on try14 and try16 later this Wednesday morning.

Wed Jul 5 17:20:34 PDT 2006
I'm going to do a fresh start with specific constraints. I'm going to set up the sheet constraints that occur at the end of the sequence:
SheetConstraint (T0304)H88 (T0304)A94 (T0304)T103 (T0304)L97 hbond (T0304)L92 1
SheetConstraint (T0304)K100 (T0304)S106 (T0304)P116 (T0304)V110 hbond (T0304)G102 1
The question I have is what to do with the R33-Q36 strand. Does it do the following?
SheetConstraint (T0304)R33 (T0304)L34 (T0304)Y111 (T0304)L112 hbond (T0304)R33 1
Or is it anti-parallel?
SheetConstraint (T0304)R33 (T0304)L34 (T0304)L112 (T0304)Y111 hbond (T0304)R33 1
Or is it "not at all"? Try6 scores well, but it does so by breaking our ehl2 constraints. I'm not sure I like that. I've got what I like from try6.sheets.
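For reference when reading the SheetConstraint lines, here is a small sketch of the residue pairing I take them to imply, assuming the convention that the second strand's residue numbers run forward for parallel pairing and backward for antiparallel. The helper is illustrative only, not part of undertaker.

```python
# Sketch of the residue pairing implied by a SheetConstraint, assuming
# (my reading, not stated anywhere authoritative here) that the second
# strand's range runs forward for parallel and backward for antiparallel.
def strand_pairs(s1, e1, s2, e2):
    """Pair residues of strand1 (s1..e1) with strand2 (s2..e2)."""
    orientation = "parallel" if s2 <= e2 else "antiparallel"
    step = 1 if s2 <= e2 else -1
    pairs = list(zip(range(s1, e1 + 1), range(s2, e2 + step, step)))
    return orientation, pairs

# The two readings of the R33-Q36 question above:
print(strand_pairs(33, 34, 111, 112))  # parallel: 33-111, 34-112
print(strand_pairs(33, 34, 112, 111))  # antiparallel: 33-112, 34-111
```

Under this reading, the two candidate constraints differ only in which residue of Y111/L112 ends up opposite R33.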
OK, I'll just start with the last sheet.
try17 running on shaw

Wed Jul 5 20:20:42 PDT 2006 George Shackelford
try17 comes from 2gaiA. try17 got one part of the sheet right, but the other part turned into a helix, and a helix turned into a strand. I don't buy that. The result scores well, but not as well as it could. I'm going to take 2gaiA out of the picture and see what comes up. I'm also going to crank up the constraints a bit and do a fresh run. I included the appropriate *.under files as lines in try18.under and commented out the references to 2gaiA. I believe I have what needs to be isolated, isolated.
try18 running on shaw

Thu Jul 6 16:16:55 PDT 2006
Try18 doesn't do too well compared to the best tries, but it does do enough to indicate where to put the next strand as part of the main sheet. There still needs to be another layer to cover the buried area; I don't know where that will come from. First things first. The new strand is added by:
# 30 - 34 ^ 30 114 - 110 hbond 35
We'll see.
try19 running on vashon

1ew0A    451.032    3.69698    3.30.450.20-130
1kpf     435.765    3.57184    3.30.428.10-111
1cc3A    428.905    3.51561    2.60.40.420-130
1regX    427.574    3.5047     3.30.70.650-122
2azaA    426.223    3.49363    2.60.40.420-129
1qtoA    425.896    3.49095    3.10.180.10-122
1eteA    424.12     3.47639    1.20.1250.10-134
1cuoA    423.967    3.47514    2.60.40.420-129
1jzgA    421.72     3.45672    2.60.40.420-128
1d4xG    421.614    3.45585    3.40.20.10-124
1nwpA    419.655    3.43979    2.60.40.420-128
1poc     418.745    3.43234    1.20.90.10-134
1ewjA    418.485    3.43021    3.10.180.10-119
1bylA    416.355    3.41274    3.10.180.10-122
3rubS    415.736    3.40767    3.30.190.10-123
1ecsA    415.612    3.40665    3.10.180.10-120
1rrpB    415.498    3.40572    2.30.29.30-134
1cklA    415.039    3.40196    ,2.10.70.10-62,2.10.70.10-64
1ttbA    414.849    3.4004     2.60.40.180-127
1h9dA    414.294    3.39585    2.60.40.720-125

I'm going to check out 3.10.180.10 and 3.33.xxx while I'm waiting on try19.
try20 running on vashon
try20 focused on 1ew0A. It is looking good, but we've done 1ew0A before.
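The last column in the table above looks like a CATH classification code with the template's length appended after a dash (e.g. "3.30.450.20-130"); that interpretation of the suffix is my assumption, not something the notebook states. A throwaway sketch splitting a single such field (the multi-domain 1cklA entry would need comma handling first):

```python
# Sketch: split a field like "3.30.450.20-130" into what appears to be
# a CATH code and a residue length. The length interpretation of the
# numeric suffix is an assumption, not documented in the notebook.
def split_cath(field):
    code, _, length = field.rpartition("-")
    return code, int(length)

print(split_cath("3.30.450.20-130"))  # ('3.30.450.20', 130)
```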
try21 running on vashon
try21 focused on 1ewjA. Not bad. Does OK.
try22 running on vashon
try22 was forced to focus on 2.60.40.180. try22 is trash; I'm not going to push any more in that direction.

Fri Jul 7 12:58:01 PDT 2006 George Shackelford
Summary:
tries 2-6: polished versions of server models
try7: weak effort to achieve constraints lifted from tries 3 and 6
try8: accidentally got trashed
try9: latches onto the original alignments because of the "ReadFragmentAlignment" statement
try10: took out the "ReadFragmentAlignment" statement and got trash
try11: based on 1cuoA(?)
try12: based on 1d4xG(?)
try13: based on 1jzgA(?)
== start of using the 'long' alignments
try14: based on 1ew0A of long
try15: failed due to bad setup
try16: based on 1jzgA of long align
try17: based on 2gaiA of all-align
try18: based on 1vpkA of local-str2+near
try19: based on 1zcjA of local-str2+near
try20: based on 1ew0A of long (AGAIN!) see try14
try21: based on 1jzgA of long (AGAIN!) see try16
try22: based on 1cklA of long (trash)

Looking at score-all.unconstrained we find:
try6
try5
try13 (needs breaks and soft clashes fixed)
try11 ( " )
try3
try17 (improve the sheets)
try4
try20 (sheets, breaks, soft clashes)
try1 (breaks, soft clashes)
try12 (less than 1 point separates try12 and try4!)

The use of the 'long' aligns is hurting, not helping, in getting good scores. I may retry some of the alternate bases using read-scwrl-alignments. Time to do some polishing... Does it do better to polish an opt2.gromacs.pdb or an opt2.pdb?
try23 is polishing try13.gromacs
try24 is polishing try13

Fri Jul 7 15:24:40 PDT 2006 George Shackelford
try23 seems to be doing better than try24. I think I'll use gromacs0 for polishing try11.
try25 is polishing try11 on orcas

Fri Jul 7 22:08:45 PDT 2006 George Shackelford
Lost some input here due to a system failure... try23 did better than try24. try11 as try25 did quite a bit better. I'm going to do the same for try20, try1, and try12.
I need to look at the sheets in try17 before I can do more with it.
try20 as try26 running on orcas
try1 as try27 running on orcas
try12 as try28 running on shaw
I've looked at try17, and I don't see any way to improve the sheets. I'm going to hold off on this till tomorrow. As I look at some of the latest results, I suspect I need to rerun using the regular opt2.pdb in place of opt2.gromacs0.pdb. Or I need to check the impact of pred_alphas!

Sat Jul 8 13:47:49 PDT 2006 George Shackelford
The weights used on pred_alphas in unconstrained do hurt try28. But that is not what bothers me the most. I went in and reviewed our best-scoring tries (i.e., try2-try6), and I don't really like how the rasmol ehl2 and near scripts look on try26 and try28. I do like the way try14 handles ehl2 and near. It looks more likely. I had a problem with my 'long' alignment; I'm going to redo it using the read_alignments for 1ew0A. Perhaps we can get a better alignment that scores well.
try29 running on shaw

Sat Jul 8 16:20:20 PDT 2006 George Shackelford
try29 makes a nice sheet, then foams up with too many helices. I'm going to retry and crank up the beta weights from 50/100 to 100/200. Let's see if we can force more sheets and fewer helices. I'm including a couple of other chains (1regX and 1kpf) which are similar to 1ew0A, to see if something decent comes out of it.
try30 running on shaw

Sat Jul 8 16:26:09 PDT 2006 Kevin Karplus
There is a lot of information above, but I'm having a hard time figuring out which 5 models George is currently favoring. The superimpose-best.under script has
    ReadConformPDB T0304.try13-opt2.pdb
    ReadConformPDB T0304.try11-opt2.pdb
    ReadConformPDB T0304.try12-opt2.pdb
    ReadConformPDB T0304.try1-opt2.pdb
    ReadConformPDB T0304.try8-opt2.pdb
Are those the current 5 favorites?

Sat Jul 8 17:48:42 PDT 2006 Kevin Karplus
I see that George has started more runs, but has not put any comments here about what they are attempting to do or which models he currently favors.
Sat Jul 8 22:43:29 PDT 2006 George Shackelford
try30 and try31 didn't do very well. I'm dropping them. I'm still having a hard time deciding which ones I like. try24 scores best with unconstrained (both try23 and try24 come from try13). try11 needs re-polishing. When I did it as try25, it failed even though the 'best' scores it was getting looked very good. Is something going wrong?

********** my best choices for now **********
try24: top scoring on unconstrained
try11: next best on unconstrained
try17: third best scoring
try20: best fit to ehl2 and near, as well as scoring well
try12: scores about the same as try20

I tried to polish try17, try20, and try12, but the polished versions did not do as well as the originals. I'm going to re-polish them and see what I can get.
polishing:
try11 as try32 on shaw
try17 as try33 on orcas
try20 as try34 on shaw
try12 as try35 on orcas

Sun Jul 9 09:14:34 PDT 2006 Kevin Karplus
George, please remember to update superimpose-best.under and remake best-models.pdb.gz when you have a new set of top choices, so that I can look at them quickly. I'm trying to juggle at least 6 different targets all the time, and extra time spent doing this for you means I have less time to look at the models.

Also, what do you mean that try25 "failed"? It didn't crash, and it improved the cost substantially from where it started, so undertaker was doing what it was supposed to. If that isn't what you *wanted* it to do, you need to be more explicit (both to me and to undertaker) about what you want. What was try25 supposed to do that it didn't do? Maybe then I could help you figure out how to change the costfcn or optimization to get closer to what you want.

Of the models George listed, only try20-opt2 is really compatible with the secondary structure prediction. It's true that we have too little diversity in the multiple alignment to put too much faith in the secondary structure prediction (and there is essentially no signal for rr predictions).
try17-opt2 is far too open in the center.

Sun Jul 9 09:31:11 PDT 2006 Kevin Karplus
There are 4 CYS and 5 HIS residues, so there is a strong presumption of metal binding, but no attempt has been made to cluster these residues. I just noticed that the try1.costfcn was *not* the automatically generated one. It was stomped on Jul 3, probably when George was creating try11.costfcn. I have recreated it, and I *hope* it is now the same as the original.

Sun Jul 9 09:38:53 PDT 2006 Kevin Karplus
I have created a secondary.costfcn, which has constraints from just the secondary structure prediction. It clearly prefers a model from the servers to anything George has found:
try6-opt2 (from try5-opt2 from SP3_TS2)
try5-opt2 (from SP3_TS2)
try34-opt2 (from try20-opt2 from 1ew0A_long)
try31-opt2 (from 1i1qA)
try24-opt2 (from try13-opt2 from 1jzgA)
try26-opt2 (from try20-opt2.gromacs0 from 1ew0A_long)
try23-opt2 (from try13-opt2.gromacs0 from 1jzgA)
So the top 7 runs reflect 4 lineages:
try6-opt2, try5-opt2 from SP3_TS2
try34-opt2, try26-opt2 from try20-opt2 from 1ew0A_long
try31-opt2 from 1i1qA
try24-opt2, try23-opt2 from try13-opt2 from 1jzgA

Given that this is an ab-initio model, I think that we need more distinct models and less time spent polishing them. I see no discussion above about how the sheet should be formed. try6 has one strand that was predicted to be helix, but is otherwise fairly reasonable. try34 looks feasible, but has the long helix broken up. try31 has a nice bit of three-strand sheet, and a long helix, but everything before the long helix looks wrong (made into two long helices). try24 is a rather crummy-looking sandwich that has really messed up the long helix.

Sun Jul 9 09:44:29 PDT 2006 George Shackelford
I have rerun score-all.unconstrained.pretty. We have:
try24
try32
try34
try33
try35
I have updated superimpose-best.under and remade best-models.pdb.gz.
The "problem" I had with try25 may not have been a problem, but when I grepped 'best' on the log file I got (and still get) at the end:
# generation 298: best score out of 40: T0304.try25-opt1 161.69035 cost/residue, 50 clashes 0.01765 breaks
# generation 299: best score out of 40: T0304.try25-opt1 161.62218 cost/residue, 50 clashes 0.01769 breaks
# generation 300: best score out of 40: T0304.try25-opt1 161.61887 cost/residue, 50 clashes 0.01769 breaks
But when I checked score-all.try25.pretty I got:
T0304.try25-opt2.pdb.gz 9.0 8.6 8.5 46.2 62.3 51.5 17.9 -0.6 5.3 4.4 -19.5 -0.3 3.4 0.1 5.0 0.2 0.2 0.2 -5.1 -3.7 -11.3 -18.1 0.0 164.24
I was bothered by the loss of some three points. Did I "break" undertaker? But worse, against unconstrained.pretty, compared to the latest polishing of try11 as try32, I find:
T0304.try32-opt2.pdb.gz 8.8 8.7 8.5 47.5 62.0 51.2 17.8 -0.6 3.3 4.2 -24.0 -0.9 2.5 0.1 5.6 1.8 1.9 2.4 -2.5 -3.7 -11.5 -17.7 0.0 165.61
T0304.try25-opt2.pdb.gz 9.0 8.6 8.5 46.2 62.3 51.5 17.9 -0.6 5.3 4.4 -19.5 -0.3 1.7 0.1 1.7 3.5 4.1 5.2 -2.6 -3.7 -11.3 -18.1 0.0 173.89
Most of the difference comes from higher scoring of pred-alphas and lower scoring of break. Nevertheless, I saw this as a failure, both in undertaker getting the "best" results and in using try11-opt2.gromacs0.pdb.gz to start the polishing rather than try11-opt2.pdb.gz. The reason I was interested in using gromacs0 is that the main problem with try11 (and others) comes from the breaks, and gromacs0 does a good initial job of healing the breaks. Furthermore, gromacs0 moves the atoms about, so we may escape a local minimum (although we may end up in a new and worse local minimum). I had done a number of polishings using the gromacs0.pdb, so I have now rerun using the opt2.pdb instead.

Sun Jul 9 13:08:30 PDT 2006 Kevin Karplus
Using gromacs to get out of a local minimum often works.
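Eyeballing which components of two score-all rows differ is easy to automate. The notebook does not name the columns, so the sketch below reports only positions and deltas; on the two unconstrained.pretty rows above (with the total column dropped), the largest differences fall on the columns that the entry attributes to pred-alphas and break.

```python
# Sketch: automate the component-wise comparison of two score-all rows
# to see which cost terms account for the gap. Column names are not
# given in the notebook, so only indices and deltas are reported.
def top_deltas(row_a, row_b, n=3):
    """Return the n largest per-component differences as (index, a, b, b-a)."""
    diffs = [(i, a, b, b - a) for i, (a, b) in enumerate(zip(row_a, row_b))]
    return sorted(diffs, key=lambda t: abs(t[3]), reverse=True)[:n]

# Rows transcribed from unconstrained.pretty above (total column dropped):
try32 = [8.8, 8.7, 8.5, 47.5, 62.0, 51.2, 17.8, -0.6, 3.3, 4.2, -24.0,
         -0.9, 2.5, 0.1, 5.6, 1.8, 1.9, 2.4, -2.5, -3.7, -11.5, -17.7, 0.0]
try25 = [9.0, 8.6, 8.5, 46.2, 62.3, 51.5, 17.9, -0.6, 5.3, 4.4, -19.5,
         -0.3, 1.7, 0.1, 1.7, 3.5, 4.1, 5.2, -2.6, -3.7, -11.3, -18.1, 0.0]
for i, a, b, d in top_deltas(try32, try25):
    print(f"column {i}: {a} -> {b} (delta {d:+.1f})")
```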
The gromacs0.repack-nonPC models are often even better, since rosetta repacking fixes some of the damage gromacs does to the sidechains, though it often makes the undertaker score worse (since it may re-introduce clashes). There are diminishing returns for this sort of optimization, though, and repeatedly cycling through the optimizers may make very little difference. This sort of polishing should probably be reserved for models that are believed to be very close to correct, in which tiny movements of the backbone or sidechains are worth doing. I don't believe any of our models are close to that point---we are probably better off exploring wildly different topologies, rather than polishing the ones we have.

George, did you pick your 5 best to be from different families? I don't want multiple copies of essentially the same prediction for this target, but 5 distinctly different models. You've also ignored try6 (which comes from a server), even though it scores the best with both the unconstrained and secondary cost functions.

Some of the differences in scoring you are seeing, George, come from different weighting of the costfcn components, but there may also be a bug in undertaker which causes it to occasionally lose track of a small break and think that the cost is slightly lower than it would be if the model were read in from scratch.
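The "diminishing returns" observation amounts to a convergence loop: cycle the optimizers until a full pass stops paying for itself. A toy sketch of that control flow, with a mocked-up step whose gains shrink each pass (none of the real undertaker/gromacs/rosetta tools are invoked):

```python
# Generic sketch of cycling optimizers until diminishing returns:
# keep re-running a list of refinement steps until a full pass no
# longer improves the cost by more than eps. Steps are stand-ins.
def cycle_until_converged(cost, steps, eps=0.5, max_passes=20):
    for _ in range(max_passes):
        before = cost
        for step in steps:
            cost = step(cost)
        if before - cost < eps:  # lower cost is better
            break
    return cost

# Mock step whose gains shrink each pass, mimicking diminishing returns:
gains = iter([5.0, 3.0, 1.0, 0.3, 0.1] + [0.0] * 50)
final = cycle_until_converged(170.0, [lambda c: c - next(gains)])
print(round(final, 2))  # stops after the first pass that gains < eps
```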
I'm not certain of this, but if so, the bug is probably in HealGap, which does have the ability to discard breaks that it thinks are too small to keep track of.

(score-all rows that were pasted into the middle of the sentence above; the first and last rows are truncated:)
erver_TS1-scwrl 10.4 8.6 9.5 64.6 77.2 61.3 18.0 -1.6 16.7 9.8 6.3 2.0 36.0 1.3 15.6 3.9 5.4 6.7 -0.8 -0.8 -0.4 0.0 0.0 349.76 0.0 0.4 7.6 0.3 7.1 0.3 -0.4 -0.4 0.0 0.17
Zhang-Server_TS1 9.8 8.6 9.5 59.7 77.6 61.0 18.2 -1.6 16.7 9.8 20.6 2.0 12.0 1.3 15.6 3.9 5.4 6.7 -1.1 -0.8 -0.4 0.0 0.0 334.45 0.0 0.4 7.7 0.3 7.1 0.3 -0.4 -0.4 0.0 0.17
Zhang-Server_TS4 9.4 9.1 8.8 62.8 83.7 63.5 19.1 -1.6 11.6 8.0 13.7 0.7 10.9 0.9 10.7 4.0 5.4 6.7 -1.4 -1.2 -0.0 0.0 0.0 324.82 0.0 0.4 8.5 0.3 7.9 0.3 -0.3 -0.3 0.0 0.28
Zhang-Server_TS4-scwrl 9.5 9.1 8.8 62.6 85.6 71.3 19.2 -1.6 11.6 8.0 -4.9 0.7 28.1 0.9 10.7 4.0 5.4 6.7 -1.0 -1.2 -0.0 0.0 0.0 333.63 0.0 0.4 8.6 0.3 7.9 0.3 -0.3 -0.3 0.0 0.28
Zhang-Server_TS3 8.9 8.4 8.4 53.3 68.4 53.4 18.0 -1.6 15.8 8.8 22.2 2.1 11.8 1.3 14.0 3.4 4.5 5.6 -1.6 -1.7 -0.4 -0.1 0.0 302.79 0.0 0.4 8.7 0.3 8.2 0.3 -0.4 -0.3 0.0 0.28
Zhang-Server_TS3-scwrl 9.2 8.4 8.4 55.6 72.9 54.0 18.1 -1.6 15.8 8.8 2.3 2.1 34.5 1.3 14.0 3.4 4.5 5.6 -1.3 -1.7 -0.4 -0.1 0.0 313.77 0.0 0.4 8.7 0.3 8.2 0.3 -0.4 -0.3 0.0 0.28
Zhang-Server_TS2 9.4 8.2 8.7 56.2 82.4 59.2 17.9 -1.5 16.3 8.6 21.8 -0.9 11.5 1.3 12.4 3.8 5.1 6.4 -1.4 -1.6 -0.8 0.0 0.0 323.15 0.0 0.4 10.0 0.3 9.3 0.3 -0.4 -0.3 0.0 0.29
Zhang-Server_TS2-scwrl 9.5 8.2 8.7 64.9 88.2 62.4 18.0 -1.4 16.3 8.6 1.9 -0.9 30.5 1.3 12.4 3.8 5.1 6.4 -1.2 -1.6 -0.8 0.0 0.0 340.32 0.0 0.4 10.1 0.3 9.3 0.3 -0.4 -0.3 0.0 0.29
T0304.try3-opt2.gromacs0 9.0 8.3 8.3 47.7 66.4 54.9 19.1 2.8 1.3 4.2

My list: try6, try34, try31, try24
George's list: try24, try32, try34, try33, try35
Combined: try34, try24, try31, try32, try33, try35, try6

Looking at the models to reduce to only 5, I'll drop try33 and try35, leaving try34, try24, try31, try32, try6. I could be talked out of try31, though it has one of the best 3-strand sheets, because the extra helices are terrible.
Sun Jul 9 13:36:28 PDT 2006 Kevin Karplus
I have submitted with comment:

    Our results for this model were awful. We did not initially get a
    model that included the strongly predicted helix for D56-T78. We
    ended up looking at server models that scored well with our
    unconstrained cost function, and searching for even more remote
    fold recognition models. We did not have the time to do real
    "ab initio" modeling of this target.

    Model 1 is try34-opt2, from try20-opt2, from alignment to 1ew0A.
    Model 2 is try24-opt2, from try13-opt2, from alignment to 1jzgA.
    Model 3 is try31-opt2, from alignment to 1i1qA.
    Model 4 is try32-opt2, from try11-opt2, from alignment to 1cuoA.
    Model 5 is try6-opt2, optimized from SP3_TS2. It not only has the
    long helix, but it scores best with our cost functions.

George, if you want to change the submission, please send me e-mail.

Sun Jul 9 13:21:29 PDT 2006 George Shackelford
Perhaps I have read this wrong, but it appeared to me that this sequence has a lot of hydrophobics, and that the structure would need at least three layers to cope with the hydrophobicity. Try6 looks like a nice structure, but I see it as too exposed. Superimposing shows that the first two models, try24 and try32, are too similar. I'd drop try32, but I can't seem to find a decent replacement.

Tue Jul 25 14:15:36 PDT 2006 Kevin Karplus
The correct solution is 2h28A, and no one got very close. The best server was Zhang-Server_TS1 with a GDT of 39%. Our best model was try3-opt2.gromacs0 (GDT 31%, based on ROBETTA_TS3), but our best submitted model was model 1 (GDT 18%). Our second-best set was try4 (based on robetta9, via Pcons6). The top model with undertaker alone was try17-opt2.gromacs0 (GDT 28.5%). So, overall, I'd say we messed this target up pretty badly, not recognizing halfway-decent models and chasing bad ones. It seems that our soft submission (with try3 in front) was better than our final submission.
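For readers unfamiliar with the GDT percentages quoted in the evaluation entries: GDT_TS is conventionally the average, over 1, 2, 4, and 8 Angstrom cutoffs, of the fraction of CA atoms within that cutoff of the experimental structure after superposition. A sketch of that formula with made-up distances (the real score also maximizes over superpositions, which this ignores):

```python
# Sketch of the conventional GDT_TS formula behind the percentages
# quoted above: average, over 1/2/4/8 A cutoffs, of the fraction of
# CA atoms within that cutoff. Distances are invented for illustration.
def gdt_ts(ca_distances):
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    n = len(ca_distances)
    fracs = [sum(d <= c for d in ca_distances) / n for c in cutoffs]
    return 100.0 * sum(fracs) / len(cutoffs)

dists = [0.5, 1.5, 3.0, 6.0, 12.0]  # hypothetical per-residue CA errors
print(round(gdt_ts(dists), 1))
```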
Wed Jul 26 20:22:52 PDT 2006 Kevin Karplus
George asked me
> Kevin, please look at T0304.try34-opt2.pdb, its parent 1ew0A, and the
> solution 2h28A. The parent is the top scoring using "alphabetmatch." I
> believe that there is merit in examining a range of possible templates.
> Doing so takes some extra computer time but not much human time.
I looked at the evaluation scores, which were poor (GDT 18%, when our best was try3 at 31%, from Robetta, and the Zhang server got 39%). Try34 wasn't even the best we generated ourselves, so I'm not sure what George's point was.

Fri Sep 8 15:45:28 PDT 2006 Kevin Karplus
We did have a decent model: T0304.try3-opt2.repack-nonPC, which we did not submit (GDT 30.6, 9.7 Angstrom RMSD_CA, 48.5% of the correct Hbonds present). We should have stuck with our soft submission. Note: try3 was a polishing of a Robetta model, but was better than the Robetta model.