Thu Jul 8 06:40:25 PDT 2004  T0231  Due 6 Aug 2004

Thu Jul 8 11:06:38 PDT 2004 Kevin Karplus
Comparative model for 1f7sA (and other d.109.1.2 domains).

Thu Jul 8 15:47:30 PDT 2004 Kevin Karplus
try1-opt2 looks pretty good for the most part.
I thought I'd like to flip the first strand around so that it runs
antiparallel to strand 3, but all the templates have it coming in
parallel at the beginning of strand 3, as it does in try1-opt2, so I
think I'll leave it alone.

All the models from alignments are very similar, so I think all we
need to do on this one is pack things a bit tighter.  I'll raise the
dry weights, clash costs, and break costs, and replace the constraints
with the strands and helices of try1-opt2.

Thu Jul 8 19:23:43 PDT 2004 Kevin Karplus
The t04 alignment seems to have greater diversity and identifies more
key residues than the t2k one, so I prefer the scripts from it.
The mutual-information constraints don't seem to be very useful for
this target.

The try1 cost function seems to prefer try1-opt2 to try2-opt2;
the try2 cost function prefers try2-opt2.

For try3, I took out the constraints and raised break, soft_clashes,
and some of the "dry" weights, to try to get good packing without
forcing the model.  The try3 weights prefer the try2-opt2 model, but
only by a little bit.  I'll run try3 from the existing models rather
than from alignments.  This may be the final polishing run.

Fri Jul 9 09:34:44 PDT 2004 Kevin Karplus
try3-opt2 scores well, but Rosetta likes try2-opt2.repack-nonPC
better than the repacked version of try3.

Sun Sep 19 18:57:33 PDT 2004 Kevin Karplus
I put REAL_PDB:=1vkkA and FINAL_COSTFCN:=try3 into the Makefile and
did a whole-chain evaluation using rmsd.

Our best model is try1-opt1 (at 2.1698 Ang), which we didn't submit.
The order for the submitted models and the robetta models is
    model2, model1, model3, model5, model4,
    robetta3, robetta2, robetta5, robetta4, robetta1.
Our best submitted model (model2) is at 2.22 Ang and our model1 at
2.24 Ang, while robetta's best is at 4.63 Ang and its model1 at
11.21 Ang.  We CLEARLY beat robetta on this one, and our hand-assisted
predictions (not much hand assistance) beat the fully automatic
prediction.

    model2  try2-opt2.repack-nonPC
    model1  try3-opt2
    model3  try1-opt2
    model5  alignment, almost complete
    model4  alignment

We did improve over the plain alignment.  Rosetta's repacking of
try2-opt2 did improve it, but repacking of try1-opt2 and try3-opt2
did not improve them.

Wed Sep 22 17:45:28 PDT 2004 Kevin Karplus
With the GDT score, the order is

    model5     89.5985
    model2     88.3212
    model1     88.1387
    model3     88.1387   full auto
    robetta4   72.6277
    robetta3   71.7153
    robetta2   69.8905
    robetta1   69.7080
    model4     69.3431
    robetta5   60.1606

By this criterion, the model5 alignment did better than the later
iterations, but the model4 alignment was worse.  Undertaker did a
fairly good job of extracting the good stuff from the alignments.
And we beat robetta!
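(For reference, a minimal sketch of how a GDT_TS-style score is
computed: the average, over 1, 2, 4, and 8 Ang cutoffs, of the
percentage of C-alpha atoms within that cutoff of the native
structure.  This is only an illustration, not our actual scoring
code; the function and argument names are hypothetical, and it
assumes the model and native C-alphas are already matched and
superimposed, whereas real GDT searches over many superpositions
and keeps the best fraction at each cutoff.)

import numpy as np

def gdt_ts(model_ca: np.ndarray, native_ca: np.ndarray,
           cutoffs=(1.0, 2.0, 4.0, 8.0)) -> float:
    """GDT_TS-like percentage for two superimposed (n_residues, 3)
    arrays of matched C-alpha coordinates: the mean, over the distance
    cutoffs, of the fraction of residues within that cutoff."""
    dists = np.linalg.norm(model_ca - native_ca, axis=1)
    fractions = [(dists <= c).mean() for c in cutoffs]
    return 100.0 * float(np.mean(fractions))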
Fri Sep 24 21:16:02 PDT 2004 Kevin Karplus
Using the new smooth_GDT score, we get

    name                   length  missing_atoms  rmsd     rmsd_ca  GDT       smooth_GDT
    model5.ts-submitted    142     8              2.6047   1.7612   -87.4088  -80.7884   alignment
    model2.ts-submitted    142     0.0000         2.2208   1.5355   -85.9489  -79.6611   try2 repacked
    model1.ts-submitted    142     0.0000         2.2391   1.5456   -85.5839  -79.3687   try3
    model3.ts-submitted    142     0.0000         2.2472   1.6256   -85.5839  -79.0084   full auto
    robetta-model3.pdb.gz  142     0.0000         4.6333   3.9880   -71.8978  -66.6969
    robetta-model4.pdb.gz  142     0.0000         9.1556   8.3565   -70.2555  -66.2289
    model4.ts-submitted    142     192            2.8503   2.2444   -68.9781  -64.0862   alignment
    robetta-model1.pdb.gz  142     0.0000         11.2148  10.4542  -68.4307  -64.0494
    robetta-model5.pdb.gz  142     0.0000         8.7496   7.9674   -68.6131  -64.0420
    robetta-model2.pdb.gz  142     0.0000         8.4984   7.7658   -67.5182  -62.7585
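(Sketch of the idea behind a smoothed GDT, for readers unfamiliar
with it: replace the hard 1/2/4/8 Ang cutoffs with a sigmoid weight
on each residue's deviation, so the score changes continuously and a
small coordinate change can't flip a residue across a threshold.
The code below is only an illustrative guess at such a scheme, not
the actual undertaker smooth_GDT; the cutoffs, sigmoid steepness, and
normalization are assumptions, and the negative values in the table
presumably reflect undertaker's convention of reporting scores as
costs to be minimized.)

import numpy as np

def smooth_gdt(model_ca: np.ndarray, native_ca: np.ndarray,
               cutoffs=(1.0, 2.0, 4.0, 8.0), steepness=2.0) -> float:
    """Sigmoid-weighted analogue of GDT_TS on superimposed, matched
    (n_residues, 3) C-alpha coordinate arrays."""
    dists = np.linalg.norm(model_ca - native_ca, axis=1)
    score = 0.0
    for c in cutoffs:
        # weight ~1 when dist << cutoff, ~0 when dist >> cutoff,
        # smooth in between
        score += np.mean(1.0 / (1.0 + np.exp(steepness * (dists - c))))
    return 100.0 * score / len(cutoffs)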