Thu Jun 26 10:42:37 PDT 2008 T0473 Make started

Thu Jun 26 10:43:10 PDT 2008 Running on peep.cse.ucsc.edu

Thu Jun 26 12:45:18 PDT 2008 Kevin Karplus
T0473 has a modest E-value for 2fi0A (0.045), but the fit for the try1-opt3 model based on it looks pretty good. This short protein is a bit small for the rr predictions to be of much use, so polishing the try1 model is about all I think we need to do on this one.

C30 and C33 are conserved, but don't appear to form a disulfide. H48 is also conserved and near C30. H29 currently seems to point away from C30.

The 2fi0 protein is not much help in figuring out function, since it is also of unknown function (or was in 2005) and has no ligands.

Thu Jun 26 17:35:44 PDT 2008 Kevin Karplus
try2-opt3.gromacs0 scores best with the try2 costfcn, which indicates a need for further polishing to reduce breaks---mostly the break before P34.

Thu Jun 26 22:14:13 PDT 2008 Kevin Karplus
Something really weird is going on in try3---it starts out optimizing try2-opt3.gromacs0, then reduces the breaks until it thinks they are gone, but on rescoring the models in score-all.try3.pretty, the try3 models have much worse clashes than try2. What went wrong? Must be an undertaker bug.

all.clashes reports the bad clashes involving S36, F63, M1, and other residues. Why are these getting missed during the optimization?

Fri Jun 27 09:17:49 PDT 2008 Kevin Karplus
I'll make a try4 run, identical to the try3 run except for the names and the initial starting seed, and see if the problem recurs. If not, I'll come back to debugging undertaker when I have some spare time. If it does recur, I'll have to dig in this weekend and figure out the bug.

Fri Jun 27 11:27:03 PDT 2008 Kevin Karplus
It seems that try4 has failed in the same way. More precisely, it did not see try3-opt3 as bad and went ahead and polished it, leaving in the terrible clashes.

Fri Jun 27 15:07:52 PDT 2008 Kevin Karplus
NOT an undertaker bug! It was a bug in the try3.under and try4.under files. I had accidentally scaled soft_clashes by 0 instead of by 0.5 initially, so the clashes were being ignored. I'm trying again in try5, but with soft_clashes not being scaled away.
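(Illustrative sketch, in Python rather than undertaker's own code: the try3/try4 failure above is what happens when one term of a weighted-sum cost function gets a weight of zero. The term names and numbers below are made up purely to show how a zero weight silently hides a term from the optimizer.)

    # Sketch only (not undertaker code): a cost function as a weighted sum of terms.
    # Term names and values are invented for illustration.
    def total_cost(term_values, weights):
        """Weighted sum of cost terms; a zero weight silently drops its term."""
        return sum(weights[name] * value for name, value in term_values.items())

    terms = {"soft_clashes": 12.0, "breaks": 3.0}         # hypothetical evaluated terms
    bad_weights  = {"soft_clashes": 0.0, "breaks": 1.0}   # the try3/try4 mistake
    good_weights = {"soft_clashes": 0.5, "breaks": 1.0}   # clashes still penalized

    print(total_cost(terms, bad_weights))    # 3.0 -- clashes contribute nothing
    print(total_cost(terms, good_weights))   # 9.0 -- clashes are counted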
Sat Jun 28 09:05:16 PDT 2008 Kevin Karplus
try5-opt2 now scores best with the try5 costfcn, with try5-opt3 a little behind. The difference seems to be that try5-opt2 has worse clashes and try5-opt3 has worse breaks. Rosetta does like try5-opt3.gromacs0.repack-nonPC best.

I'll turn up both clashes and breaks for try6 and see if I can get the gaps to close without bad clashes. The models are moving very little, so the rest of the model (away from C33,P34) seems pretty stable.

Sat Jun 28 09:52:52 PDT 2008 Kevin Karplus
try6-opt2 scores better than try6-opt3, and the scores that the optimization run was getting are quite different from what the score-all run saw. I think that the optimization lost track of the break, which seems dangerous. It may be a result of HealGap deciding that a break is small enough to be healed, while re-reading the model sees a big enough break to count it. I may want to modify the code for HealGap to use the same test that is used when breaking a chain into segments.

Sat Jun 28 10:08:41 PDT 2008 Kevin Karplus
OK, I made that change to HealGap and will now try polishing in try7, to see if the gaps stay properly closed.

Sat Jun 28 10:40:52 PDT 2008 Kevin Karplus
Nope---try7-opt1 has a break before G32, but try7-opt2 has a bigger break there and a still bigger break before C33. try7-opt3 has both of those plus a break before E39. The optimization run was not seeing these breaks. Why not? Could it be that CloseGap is also messing up?

Sat Jun 28 10:48:47 PDT 2008 Kevin Karplus
OK, I fixed CloseGap to run find_breaks also---let's see if try8 has the problem fixed.

Sat Jun 28 11:12:58 PDT 2008 Kevin Karplus
Nope, that didn't do it either---the optimization run thought try8-opt3 was best, but the score-all run liked try8-opt2 better and reported substantially higher costs.

Sat Jun 28 11:19:16 PDT 2008 Kevin Karplus
I found one more operator that tried to remove gaps (TweakPhi), so I fixed it up to check find_breaks also. Trying again for try9.

Sat Jun 28 12:36:58 PDT 2008 Kevin Karplus
Nope, that's still not it. try9-opt1 scores better than try9-opt2, which scores better than try9-opt3, but the optimization run didn't think so. The operators that try9 thought made improvements (in order of number of improvements in producing opt2) are

## Method              aborts  kept  better / tries = success_prob  avg_improvement
## TweakPsiSubtree         12   583   140 /  686    =    0.20408        0.00217
## HealGap                  0   517   102 /  969    =    0.10526        0.00603
## TweakPsiSegment          8   301    53 /  578    =    0.09170        0.00122
## TweakPeptide             5    31     9 /   58    =    0.15517        0.00127
## HealPeptide              0    72     7 /  209    =    0.03349        0.00050
## TweakPsiPhiSegment      20   167     4 /  269    =    0.01487        0.00115
## Backrub                 40   177     3 /  236    =    0.01271        0.00071
## FixOmega                 6    70     2 /  170    =    0.01176        0.00040
## OneRotamer               0   142     1 /  411    =    0.00243        0.00010
## BigBackrub              56   132     1 /  195    =    0.00513        0.00114
## TweakPsiPhiSubtree       8    68     1 /  101    =    0.00990        0.00033
## TweakPhiSegment          3     6     1 /   68    =    0.01471        0.00000
## TweakPhiSubtree          6     8     1 /   60    =    0.01667        0.00000

Ah---there is a remove_gap in TweakPsiSegment that I hadn't protected with find_breaks. Also one in TweakPhiPsi. I'll fix those and try once more.

Sat Jun 28 13:15:12 PDT 2008 Kevin Karplus
OK! That seems to have fixed the problem. I think that T0473 is done, but I should check it against the metaservers.
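(Illustrative sketch, in Python rather than undertaker's own code: the whole try6-try9 saga above comes down to two pieces of code disagreeing about what counts as a chain break---the gap-removing operators (HealGap, CloseGap, TweakPhi, TweakPsiSegment, TweakPhiPsi) each used their own test, while rescoring used find_breaks. The function names and the 0.15 A tolerance below are invented stand-ins, but the sketch shows the shape of the fix: every operator consults the same break test that segmentation and scoring use.)

    # Sketch only (not undertaker code): one shared break test, used both for
    # splitting a chain into segments and for deciding whether a gap-removing
    # move really closed the gap.  The 1.33 A ideal peptide C-N length is real;
    # the 0.15 A tolerance is a made-up stand-in for whatever undertaker uses.
    PEPTIDE_C_N = 1.33   # ideal peptide-bond C-N length, in angstroms
    TOLERANCE = 0.15

    def is_break(c_to_n_distance):
        """The single test every operator and the scorer should agree on."""
        return abs(c_to_n_distance - PEPTIDE_C_N) > TOLERANCE

    def find_breaks(c_to_n_distances):
        """Indices of residue junctions that count as breaks."""
        return [i for i, d in enumerate(c_to_n_distances) if is_break(d)]

    def accept_gap_move(dists_before, dists_after):
        """Only accept a HealGap/CloseGap-style move if it leaves no more
        breaks than find_breaks() would report on rescoring."""
        return len(find_breaks(dists_after)) <= len(find_breaks(dists_before))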
Sat Jun 28 15:53:10 PDT 2008 Kevin Karplus
T0473 has the same predicted fold as T0469, and both appear to be possible sulfite reductases. Does this help us any? Probably not, as the sequences are already included in each other's t06 alignments (as gapless alignments).

Mon Jun 30 11:35:11 PDT 2008 Kevin Karplus
The MQAC model-quality assessment favors an unusual set of servers:
    RAPTOR_TS4              0.847
    MULTICOM-RANK_TS1       0.846
    MULTICOM-CLUSTER_TS1    0.845
    HHpred2_TS1             0.844
    RAPTOR_TS5              0.844
    MUProt_TS2              0.844
    MULTICOM-CMFR_TS1       0.843
    MUProt_TS4              0.843
    MUProt_TS3              0.843
    MUProt_TS1              0.843
    RAPTOR_TS1              0.843
while MQAU favors the usual suspects:
    SAM-T08-server_TS1      0.776
    Zhang-Server_TS3        0.765
    Zhang-Server_TS2        0.762
    Phyre_de_novo_TS1       0.759
    MULTICOM-RANK_TS1       0.759
    pro-sp3-TASSER_TS1      0.758
    RAPTOR_TS4              0.757
    METATASSER_TS4          0.757
    pro-sp3-TASSER_TS3      0.757
    MULTICOM-CLUSTER_TS1    0.755

I've started the two metaserver runs MQAU1 and MQAC1 to see what comes out. I suspect that SAM-T08-server will be favored by the MQAU1 run.

Mon Jun 30 21:44:48 PDT 2008 Kevin Karplus
Indeed, the SAM-T08-server_TS1 model is favored by MQAU1, but MQAC1 favors RAPTOR_TS4. The differences between the models are very small, and probably not worth fussing with any more.

I'll submit

ReadConformPDB T0473.try10-opt3.pdb    # < try9-opt1 < try8-opt2
    # < try7-opt3 < try6-opt3 < try5-opt3 < try2-opt3.gromacs0 < try1-opt3
    # < align(2fi0A)
ReadConformPDB T0473.MQAC1-opt3.pdb    # < RAPTOR_TS4
ReadConformPDB T0473.MQAU1-opt3.pdb    # < SAM-T08-server_TS1
ReadConformPDB T0473.try5-opt3.gromacs0.repack-nonPC.pdb    # best Rosetta energy
ReadConformPDB T0473.try2-opt3.gromacs0.pdb

Tue Jul 1 15:58:00 PDT 2008 SAM-T08-MQAO hand QA T0473 Submitted
Tue Jul 1 15:58:00 PDT 2008 SAM-T08-MQAU hand QA T0473 Submitted
Tue Jul 1 15:58:00 PDT 2008 SAM-T08-MQAC hand QA T0473 Submitted