Sun Sep 24 18:00:14 PDT 2006 Kevin Karplus

As of 24 Sep, the easiest target was T0346, and the hardest was T0314.
T0346 had 72% identity to 2gw2A over the whole protein, and excellent
hits to other cyclophilin (b.62.1.1) domains (SAM_T06 liked 37 templates
better than 2gw2A), most of them having about 50% identity.

BETTER ALIGNMENTS THAN MODELS

T0383 GDT 27.42% model1, 28.23% model2, 44.15% align1
The alignment was incomplete; we had the fold right, but moved the
strands out of alignment. This was our most egregious damage to a model:
our best submitted model was much worse than the first alignment.

T0329 GDT 46.76% model1, 64.64% align1
Our best model (not submitted) has the same GDT as align1. We have the
outer domain right, but the inserted helical domain is not quite right.

T0349 GDT model1 38.33%, model4 57.67%, align1 51.67%
The "correct" model seems to have been trashed a bit in
T0349.model1-real.pdb, because the NMR model in 2hfvA is longer on the
N-terminus than the sequence given to us, and the crude alignment
strategy picked out the wrong residues to align to. We should fix the
alignment of PDB files to the target sequence in undertaker and redo
this evaluation.

T0323 GDT model1 28.57%, align1 40.55%
Closing the big gaps seems to have moved things around too much. This
all-alpha model might have benefited from picking up helix-packing
constraints from the initial alignments.

Mon Aug 21 17:03:27 PDT 2006 Cynthia Hsu

MUCH BETTER AVAILABLE THAN SUBMITTED

Running the following gnuplot (best of ours, best submitted):
    gnuplot> plot 'gdt.summary' using 2:3, x

The following are noticeable points:
    (2,3) = (31,19) T0304
    (2,3) = (44,28) T0383
    (2,3) = (48,35) T0358
    (2,3) = (50,40) T0323
    # (2,3) = (87,71) T0380  bogus: the 87% comes from a refinement model

T0304: Professor Karplus already updated the README with his comments.
In essence, it was an ab initio target in which several of our generated
models, particularly the early ones (try3, try5, etc.)
were much closer to the end result than our later models.

T0383: Also updated in the README by Professor Karplus. In this model,
it appears that our scoring cost function is not accurate, and this may
have contributed to why the proper server model failed to be selected.
In addition, try23 does better than try28 and try27, though the latter
models were polishing runs. The difference seems marginal, however, so
I'm not sure where the skewed values in the gnuplot are coming from.

T0358:
Fri Sep 1 14:32:33 PDT 2006 Kevin Karplus
We worked from the wrong alignment here. The align3 model was the best
we had.

T0323:
Fri Sep 1 14:32:54 PDT 2006 Kevin Karplus
We probably worked from the wrong alignment here. The align3 model was
our best with a GDT of 50.5%, and the best we submitted was model4
(GDT 40.8%), which was align1. We should have submitted one fewer
polished model and one more alignment.

SUBMITTED VERSUS SAM_T06_server

Tue Aug 22 13:52:55 PDT 2006 Cynthia Hsu

Running the following gnuplot (bestsub, samt06):
    gnuplot> plot 'gdt.summary' using 3:7, x

The results are favorable: all of our best submitted models improved
upon those of the servers.

SUBMITTED VERSUS ROBETTA

Tue Aug 22 13:52:55 PDT 2006 Cynthia Hsu

Running the following gnuplot (bestsub, rob1):
    gnuplot> plot 'gdt.summary' using 3:8, x

[Thu Aug 31 17:38:51 PDT 2006 Kevin Karplus
 rob1 is now column 9, and we should really be comparing best submitted
 with best robetta, or model1 with rob1.
 Fri Sep 1 15:11:08 PDT 2006 Kevin Karplus
 rob1 is now column 10.]

The following are noticeable points:
    (3,8) = (40.6, 54.4) T0293
    (3,8) = (30.6, 47.7) T0350
    (3,8) = (26.7, 36.7) T0383

T0293: Described in the README by Professor Karplus. Our hand-modified
models did much worse than the server model, and almost all of the
ROBETTA models did better than ours as well. The correct model was also
never submitted.
T0350: As with T0293, our hand-modified models did worse than the server
model, and the ROBETTA models did much better.

T0383: As before, the undertaker unconstrained cost function ranked both
the Robetta and the Raptoress models, which are more accurate, below
those of our own server model.

SAM_T06_server VERSUS SAM_T02

The SAM_T06 models are sometimes better and sometimes worse than the
SAM_T02 alignments. Average quality is somewhat better for
SAM_T06_server.

SAM_T02 much better:

T0369 (28,49)
Our best model (not submitted) was try12-opt2.gromacs0, with GDT 62.2%.
SAM_T06_server_TS1 did very poorly at 28.3% GDT, but the automatic
alignment was fine at 49.1%. The best SAM_T06_server alignment was the
second alignment (TS3) at 49.0%, and even its first alignment (TS2) was
ok at 48.5%. Undertaker botched the good alignments in the server;
try1-opt2 was not so botched (GDT 43.2%). The T99 and T02 servers had
first alignments of 53.6% and 49.4%; it looks like an AA-only alignment
was better for this target.

T0323 (36,50)
As for our hand model, closing the big gaps seems to have moved things
around too much. This all-alpha model might have benefited from picking
up helix-packing constraints from the initial alignments. The alignments
(SAM_T06_server_TS2..TS4) were better in GDT than the TS1 model. The
best was SAM_T06_server_TS4 (GDT 50.1%), but SAM-T99_AL5 got up to
53.3%.
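The gnuplot comparisons above (plotting one GDT column against another
with the y=x diagonal as reference) can also be done as a small script
that lists the outlier targets directly. This is only a sketch: it
assumes gdt.summary is whitespace-separated with the target ID in
column 1 and GDT percentages in later columns (column numbers 1-based,
as in the gnuplot commands); the helper name `worse_than` and the
5-point margin are illustrative, not part of our actual scripts.

```python
def worse_than(path, col_x, col_y, margin=5.0, target_col=1):
    """Yield (target, x, y) for rows of a whitespace-separated summary
    file where column col_y is more than `margin` GDT points below
    column col_x -- i.e. the points that fall well under the y=x line
    in the gnuplot scatter.  Columns are 1-based, as in gnuplot."""
    with open(path) as fh:
        for line in fh:
            if line.startswith('#'):
                continue  # skip comment lines
            fields = line.split()
            if len(fields) < max(col_x, col_y, target_col):
                continue  # short or blank line
            try:
                x = float(fields[col_x - 1])
                y = float(fields[col_y - 1])
            except ValueError:
                continue  # header or non-numeric row
            if x - y > margin:
                yield fields[target_col - 1], x, y

# Usage, mirroring  gnuplot> plot 'gdt.summary' using 2:3, x
# (best available in column 2 vs best submitted in column 3):
#   for target, best, submitted in worse_than('gdt.summary', 2, 3):
#       print(target, best, submitted)
```

Unlike the scatter plot, this prints the target names immediately, so
the "noticeable points" lists above would not have to be read off the
plot by hand.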