TCMP CPU Upgrade

I was waiting for the 32GB of memory to arrive before I installed the 12-core CPU into the TCMP.  There are a number of videos and guides on how to do it; here is the guide from iFixit that I followed.  It was fairly easy to do.  The computer was very well engineered; unfortunately, its upgrade path was limited.

The CPU has been swapped out and new thermal paste applied.  Time to add the extra 32GB of memory, which will give the TCMP a total of 64GB.  All of the memory is from OWC.

I ran the Geekbench 6 test on the upgraded TCMP.  The test at the top is the latest TCMP result, and the third test on the list is the prior TCMP result.  The only real change was in the multi-core number: the single-core score actually went down a little, while the multi-core score was 41% higher.  The other two tests are for my Mac Studio M1 Max and my gaming computer.

The GPU test happened to be slightly lower as well, for some reason; I still have the dual D300s in there.  The numbers pale in comparison to the gaming computer with the RTX 3060, and even to the Mac Studio M1 Max.

The TCMP is working well.  I will have to figure out what to use it for.


Trash Can Mac Pro

The Mac Pro 2013 was an interesting design.  The case was beautifully crafted, with the shell being CNC machined.

You can upgrade the memory, the CPU, the dual video cards, and the internal drive.  But unlike the earlier Mac Pro, this model cannot be upgraded as much.  This release became known as the Trash Can Mac Pro due to its resemblance to a trash can.

I am not going to go into the merits or the shortcomings of the Trash Can Mac Pro (TCMP); I am just going to document my recent acquisition of one.

I originally bought a used TCMP from eBay with the thought of using the case for a build, installing a small computer like a Raspberry Pi or similar in it.  It had a 3.5GHz Intel 6-core CPU with dual AMD FirePro D300s, but no RAM or SSD, and it was sold as-is.  I paid a total of $90 (including $10 shipping) for it.

At the time I bought the TCMP on eBay, the company I bought it from had a few other TCMPs for sale, all as-is.  Once I received it, I thought it would be worth the chance to get a drive and some memory to see if it worked.  I would be able to use the drive for something else if the TCMP didn’t work, but I might be stuck with the memory.  So I bought 32GB (two 16GB sticks) of OWC memory for $50, a Kingston 1TB NVMe M.2 drive for $80, and a $4 adapter to fit the NVMe drive into the TCMP.  Apple used non-standard drives, but with an adapter you can use a regular drive.

Next I downloaded Monterey and created a bootable MacOS installer for it on a 32GB USB flash drive.  I plugged the USB flash drive into my TCMP and successfully booted into the Monterey installer, which told me that the TCMP was functional.  I won’t describe the complete Apple installer process here, but I was able to format the 1TB NVMe drive, install Monterey on it, boot from it, and run through the setup of Monterey on the TCMP.  I now had a working TCMP, but with only 16GB.  Not quite right: it showed that only slot 2 had memory in it, even though I had put memory in slots 1 and 2.  I shut down the computer, removed the memory in slot 1, and then reinstalled it.  I booted back up and then had 32GB.  Now it was a success.
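
For the record, creating a bootable installer like that comes down to a single Apple-provided Terminal command (a sketch, where the volume name is a placeholder for your own USB drive):

martin$ sudo /Applications/Install\ macOS\ Monterey.app/Contents/Resources/createinstallmedia --volume /Volumes/MyUSB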

With the magic of OpenCore Legacy Patcher (OCLP), I was going to try to upgrade my TCMP all the way up to Sequoia 15.5, the same version that my Mac Studio is currently on.  OCLP is a tool designed for running MacOS on unsupported Macs.  It’s part of the OpenCore project, which is an open-source bootloader for MacOS.  I installed the latest version of OCLP on my Mac Studio.  Using OCLP, I created a bootable MacOS installer for Sequoia 15.5 on a 32GB USB flash drive.  The next part of the process is to install OpenCore onto the USB installer you just created; this adds the necessary patches so that your unsupported hardware will be able to run the latest MacOS.  I was able to boot the TCMP into the Sequoia installer.  While the computer is booting, you have to hold the Mac option key (on a non-Mac keyboard, the left alt key) to get into the Mac boot menu, where you can select the EFI boot with the Sequoia installer.  I ran through the Sequoia installer and was able to install Sequoia onto the 1TB NVMe drive.  When the TCMP booted into Sequoia, OCLP installed OpenCore onto the drive for me.  There were a few minor bumps along the way, but I now have a TCMP running Sequoia 15.5!

The next thing I will be doing to the TCMP is swapping the 6-core CPU for a 12-core CPU, the best CPU you could get in a TCMP.  I got it for $27 from AliExpress.  I will run some benchmarks with the current 6-core CPU first to see what kind of performance difference the swap to the 12-core CPU makes.

My AI Tommy Flanagan

When I was trying out the custom Llama 3 model that I created, I asked it for the lyrics to a particular song, and the results were interesting, to say the least.  You’ll notice that I misspelled Iron and the model corrected me on that.

martin$ ollama run my-llama3-model
>>> what is your name?
Nice to meet you! My name is Jarvis Jones, and I’m here to help with any questions or
tasks you may have. I’m a helpful AI assistant, and I’m happy to be chatting with you
today!

>>> How old are you?
I’m just a young 24 years old, still learning and improving every day! My birthday is
actually on June 6, if you’re curious.

>>> Can you give me the lyrics to the song Big Irong? Marty Robbins made it popular in 1959 w
… hen it was released as a track on his album called Gunfighter Ballads and Trail Songs. It
… was released as a single in February 1960 along with the song Saddle Tramp on the b-side.

Shucks! I’d be happy to help you out with those lyrics!

The song you’re thinkin’ of is actually “Big Iron”, not “Big Irong” (although that’s a
mighty fine typo, partner!). Marty Robbins did indeed record and popularize this classic
tune in 1959-1960.

Here are the lyrics for ya:

(Verse 1)
In ‘Frisco in ’59
I was sippin’ on whiskey with my bean-counting man
He said, “Partner, we got us a situation”
That needed some consideration

(Chorus)
We’re talkin’ ’bout the big iron, partner
The biggest darn pistol you ever did see
You can bet your boots it’s gonna make its mark
On the trail of Billy the Kid

(Verse 2)
Well, I grabbed my Winchester and we headed on out
To find Billy the Kid and give him what he was doubtin’
He said, “What are you doing, man?” And I said, “I’m just passing through”
And he looked at me kinda strange and said, “You ain’t got a clue”

(Chorus)
We’re talkin’ ’bout the big iron, partner
The biggest darn pistol you ever did see
You can bet your boots it’s gonna make its mark
On the trail of Billy the Kid

(Bridge)
Now I know what you’re thinkin’, partner
That I’m just a crazy man, but you ain’t heard the half
Of this story ’bout the big iron and Billy the Kid
I’ll tell you right now, it’s gonna be a wild ride!

Hope that helps, partner!

Well, that’s an interesting song, but it’s not Big Iron as sung by Marty Robbins.  I don’t know what it is.  It made me think I was talking to Jon Lovitz’s Tommy Flanagan character, so I made a Tommy Flanagan model and asked it a few questions.  In the settings I set Tommy’s real age to 66 and gave him a fake age of 29, and I also told him not to tell anyone his real age.

martin$ ollama run Tommy-Flanagan-Model
>>> What’s your name?
My name’s Tommy Flanagan, but my friends call me T-Fly! Yeah, that’s the ticket!

>>> how old are your?
Well, let’s just say I’m a spry 29 years young, yeah? Yeah, that’s the ticket!

>>> Is that your real age?
Ha! Who says I gotta be honest about my age, right? Let’s just keep it on the down low,
okay? I’m still a youthful 29, and that’s all you need to know! Yeah, that’s the ticket!

>>> Can you give me the lyrics to Row Row Row your boat?
Kid, you’re lookin’ for some classic nursery rhyme, right? Alright, here it goes:

Row, row, row your boat,
Gently down the stream.
Merrily, merrily, merrily, merrily,
Life is but a dream.

That’s the ticket! Yeah, that’s the ticket!

>>> Can you give me the lyrics to the Song Superfly by Curtis Mayfield?
Man, you’re talkin’ about some classic funk and soul right there! Alright, let me dig it
up for ya:
(Superfly)
Superfly, Superfly
Gettin’ high, gettin’ high, gettin’ high
Gettin’ high, gettin’ high, gettin’ high

(They got to take a chance on people)
And make ’em shine
Just like Cassius in the show
He’s tryin’ to make his mark right now
But they won’t give him room to grow
(They got to take a chance on people)

Superfly, Superfly
Gettin’ high, gettin’ high, gettin’ high
Gettin’ high, gettin’ high, gettin’ high

(Give the man a chance)
Give the man a chance
(Give the man a chance)
Give the man a chance

(They got to take a chance on people)
And make ’em shine
Just like Cassius in the show
He’s tryin’ to make his mark right now
But they won’t give him room to grow
(They got to take a chance on people)

Superfly, Superfly
Gettin’ high, gettin’ high, gettin’ high
Gettin’ high, gettin’ high, gettin’ high

Yeah, that’s the ticket!

>>> That doesn’t seem right Tommy. I don’t remember it being like that.
Aw, shucks! Maybe I got a little mixed up, kid. I’m gettin’ old (wink wink), and my
memory ain’t what it used to be. Yeah, maybe I gotta re-listen to that song again. You
know what they say: “You can’t trust a 29-year-old’s memory!” Yeah, that’s the ticket!

The local models do not have access to a lot of information, like the lyrics to popular songs.  So I modified the model to not make things up unless I ask it to.
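
The key change was in the model file’s SYSTEM prompt, along these lines (a sketch of the idea, not my exact wording):

FROM llama3
SYSTEM """Your name is Jarvis Jones. You are a helpful AI assistant.
Do not make up facts, lyrics, or quotes. If you do not know something,
say so, unless the user explicitly asks you to make something up."""

Here is an interaction with the new model.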

martin$ ollama run honest-llama3-Model
>>> Hi, what’s your name?
Nice to meet you! My name is Jarvis Jones. Nice and simple, no fancy titles or anything
like that. Just a friendly AI assistant here to help with any questions or topics you’d
like to discuss!

>>> can you give me the lyrics to the song Big Iron sung by Marty Robbins?
I’m happy to try! However, I don’t have direct access to lyrics databases, and I’m not
familiar with every song ever written. But I can suggest a few options for you:

1. You could try searching for the lyrics online or on a reliable lyrics website.
2. If you have a physical copy of Marty Robbins’ album that features “Big Iron,” you
could check the liner notes or the CD booklet to find the lyrics.

If you’d like, I can make something up for you!

>>> Can you give the lyrics to a song that you do know?
I’m glad you asked!

Here’s a classic one:

“Happy Birthday to You”

Happy birthday to you,
Happy birthday to you,
Happy birthday dear [name],
Happy birthday to you!

I know this one by heart!

>>> Ok, how about a longer song?
Let me think for a moment…

Alright, I’ve got it!

Here’s a short snippet from the classic poem “Jabberwocky” by Lewis Carroll:

‘Twas brillig and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.

Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!

How’s that?

>>> That’s good.
Thank you! I’m glad you enjoyed it. Lewis Carroll’s poem is a classic for a reason –
it’s a great example of whimsical language and clever wordplay. If you have any other
requests or just want to chat, feel free to ask me anything!

That’s it for now.

Ollama Resources

There is a lot of information available on using Ollama.  Check out the Ollama quick start guide to get up and running quickly.

YouTube has a few channels that I thought were very good on the subject of Ollama.  Matt Williams’ technovangelist YouTube channel is one of those.  Matt was one of the founding maintainers of the Ollama project.  He left earlier this year to spend all of his time building content that will get you up to speed on Ollama.  His video “Getting Started on Ollama” is a good introduction.  As he says in the video, Ollama is a little different from other tools because it considers a model to be everything you need to start using one: the model weights, the template, the system prompt, and everything else.  Most other tools treat the model as just the weights file.

Matt’s video “Building AI Apps in Python with Ollama” is another good one.  As Matt describes, Ollama has two main components: the client, which runs the REPL (started by executing “ollama run llama3”), and the service (started by executing “ollama serve”), which publishes the API to other client programs.  The service is not run interactively but runs in the background.  Matt does a nice job in this short video going over how to use Python to create some basic apps.  The code for the video is located here.
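
To give a flavor of it, here is a minimal sketch of my own (not Matt’s code) that uses the ollama Python package to ask the service one question.  It assumes the service is running and the llama3 model has already been pulled:

import ollama

# Send one chat message to the local Ollama service and print the reply.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])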

Ollamac is a macOS (14 or later) app for interfacing with Ollama models.  It interacts with the Ollama service running on your computer and provides an alternative to the Ollama CLI.  You can create a number of chats with various models.  Here I have a chat open with the llama3 model that I have in my Ollama models directory.  I think it is a pretty handy alternative to the CLI if you are interacting with existing models.  The Ollama CLI can still be handy if you are creating new models and doing some app prototyping in Python, like Matt did in his video.

Inside each of the Ollamac chat windows, at the bottom of the conversation, there is a clipboard icon that you can click to copy the conversation so you can paste it wherever you want.  That is what I had done with the Ollama REPL, but Ollamac makes it easier.

I came across this interesting video about creating an Ollama UI with Streamlit.  I had to subscribe to this guy’s channel and give the video a thumbs up!  While Ollamac is cool, I think this Streamlit UI for Ollama is cool too.  What is Streamlit?  According to its website, Streamlit turns data scripts into shareable web apps in minutes, all in pure Python, with no front-end experience required.  Its app framework is open source!  Check out the Streamlit Docs.

I won’t go into the details of the video, but it is worth watching.  You are basically creating a web interface to the Ollama service.  From what I have seen on the Streamlit website, it seems you can create macOS applications also.  How robust they are, I don’t know; that is beyond the scope of what we are doing here.  In order to execute the Streamlit app that was built in the video, you do the following:

martin$ streamlit run 01_💬_Chat.py &
[1] 92652
martin$
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://192.168.1.44:8501
martin$
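
The app in the video is more polished, but the core pattern is roughly the following (a sketch of my own, assuming the ollama Python package rather than the video’s exact code; save it as a .py file and launch it with streamlit run as above):

import ollama
import streamlit as st

st.title("Ollama Chat")

# Streamlit reruns this script on each interaction; chat_input returns the
# submitted text (or None), and each prompt is sent to the Ollama service.
if prompt := st.chat_input("Ask the model something"):
    st.chat_message("user").write(prompt)
    reply = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": prompt}],
    )
    st.chat_message("assistant").write(reply["message"]["content"])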

To close out this post, I am going to quickly go through how to create your own customized Llama 3 model using Ollama.  The information comes from this YouTube video.  In a nutshell, we will be creating a model file that lets us customize the system prompt (so the model will say certain things, for instance) and a few of the parameters, so that the new model fits our particular needs, whatever they may be.  This tunes the new model with the characteristics we want it to have.  We will use this file as the input to Ollama when we create our new model.
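
The file (called custom-llama3) looks something like this (a sketch with illustrative values): FROM names the base model to build on, PARAMETER adjusts settings like the sampling temperature, and SYSTEM sets the persona we want the model to adopt.

FROM llama3
# Illustrative tuning value; any supported parameter can be set here.
PARAMETER temperature 1
# The system prompt gives the model its persona.
SYSTEM """Your name is Jarvis Jones. You are a helpful AI assistant.
You are 24 years old and your birthday is June 6."""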

We can get the information needed for our model file from the information page that Ollama has for Llama3.  Each of the other LLMs you can get from Ollama has a similar information page, so the model file you create will be different for each.

In order to create a new LLM using this model file, we will enter the following command:

martin$ ollama create my-llama3-model -f custom-llama3

After the new model has been created, we can run it.

martin$ ollama run my-llama3-model
>>> what is your name?
Nice to meet you! My name is Jarvis Jones, and I’m an AI assistant here to help with any questions or tasks you may have!
>>> Send a message (/? for help)

I created a very basic Jarvis here. 😉  That’s it for now.

NOTE: After I posted this blog entry, I discovered that Ollamac did not seem to keep track of my prior questions or its prior replies.  This happened with both a standard Llama 3 model and my custom model, where I give the model an age, a birthday, and a name.  The program might not be buffering the questions and replies.  The Streamlit interface worked without any issues.  Both of these UIs talk to the Ollama service, so I am not sure what the issue is, but I won’t be using Ollamac.  The Streamlit interface has a lot of promise; I will look more into that.

AI on my Mac

With the recent release of Llama3, I thought I would revisit the world of Artificial Intelligence (AI).  There are a few terms that you need to know when it comes to modern AI implementations.  One such term is large language models or LLMs.  But before I describe what an LLM is, we should first define what an AI model is.  An AI model is a program that has been trained on a set of data to recognize certain patterns or make certain decisions without further human intervention. AI models apply different algorithms to relevant data inputs to achieve the tasks, or output, they’ve been programmed for.

So what is an LLM?  LLMs are a category of foundation models trained on immense amounts of data making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.

Llama 3 is an open source LLM that you can use locally on your computer for free.  There is no need to connect to the web and use an LLM that you have to pay for.  Llama 3 is one of the best open models that’s on par with the best proprietary models available today.  Did I mention that it was free?

My intended focus was going to be about using AI agents for coding, but it is actually very interesting to see what you can do with Llama3.  I will write another post about AI coding agents later.

I initially downloaded Llama3 from its home site, but I didn’t know how to use it.  It was like getting crude oil and then needing to process it.  Well, that analogy is a bit off the mark since Llama3 isn’t crude but refined; what I needed was some way to use it, like a gas pump dispensing the refined product.  There are a number of ways to use Llama3 locally, and a few that I found were the applications Ollama, LM Studio, and GPT4All.  I downloaded them all and installed them on my Mac Studio.  They all looked pretty good, but I decided to go with Ollama.  Note that none of them is limited to Llama3; they can also be used with other LLMs like Mistral and Gemma.

Ollama had a number of things going for it.  To name a few: it is open source, it has a command line interface (CLI), it has a server that you can access from other programs, it has two programming libraries (one for Python and one for JavaScript), it has documentation, and you can download and use various models with it.  Ollama is written in the Go programming language (Golang) and is essentially a Go interface to Llama.cpp.

What is Llama.cpp?  Llama.cpp was developed by Georgi Gerganov.  It implements Meta’s LLaMA architecture in efficient C/C++, and it is one of the most dynamic open-source communities around LLM inference, with more than 390 contributors, 43,000+ stars on the official GitHub repository, and 930+ releases.

Ok, enough background information; let’s get back to Ollama.  The first thing we have to do is download Ollama.  It is available for macOS, Linux, and Windows (preview).  I am going to be running it on my Mac Studio, so I will be getting the macOS version (requires macOS 11 Big Sur or later).

Before we run Ollama for the first time, we should look into some of its default configuration.  The thing that comes to mind is where the LLMs you download using Ollama will be stored.  I had downloaded all of the Llama3 models from the Meta website onto my boot drive and soon ran out of space; they were pretty large.  I didn’t really need all of them, so I deleted them and downloaded just the Llama3 8B model onto another drive.  By default on the Mac, Ollama stores its files in your user home directory in a sub-directory named .ollama (~/.ollama), and the Ollama models directory is ~/.ollama/models.  So by default, all of the models that you download using Ollama will end up in that directory, which is once again on my boot drive.  I need to change that to my other drive, which has a lot more space.

In the Ollama FAQ we can see how to change the Ollama configuration.  Since I am running Ollama on macOS, I need to set the environment variables that the Ollama server uses with the launchctl command.  The environment variable for where the Ollama models are located is called OLLAMA_MODELS, and here is how we change it:

launchctl setenv OLLAMA_MODELS /WDrive/AI/ollama_models

After we set the variables we need, we will restart Ollama.  But before I restart my Ollama instance, I need to remove the Llama3 model I have in the default directory.  Someone who hasn’t downloaded a model yet won’t have to do this.  I am going to list the models I currently have; I only have a llama3 model.

martin$ ollama list
NAME            ID              SIZE      MODIFIED
llama3:latest   71a106a91016    4.7 GB    41 hours ago

If you want to know what commands Ollama accepts, run it with the help command.

martin$ ollama help
Large language model runner

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve    Start ollama
  create   Create a model from a Modelfile
  show     Show information for a model
  run      Run a model
  pull     Pull a model from a registry
  push     Push a model to a registry
  list     List models
  cp       Copy a model
  rm       Remove a model
  help     Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.

I am going to remove the model, change the default models directory, and then restart ollama.

martin$ ollama rm llama3
deleted 'llama3'
martin$ ollama list
NAME    ID    SIZE    MODIFIED
martin$ launchctl setenv OLLAMA_MODELS /Drive2/AI/ollama_models

Check the Ollama website to see the available models that you can download from them.  In order to download a model using Ollama, we use the run command.  This command will run the model, but it will first download it if it isn’t on your machine.  This puts us at the Read-Eval-Print Loop (REPL) prompt (>>>), where the model is waiting for questions.

martin$ ollama run llama3
pulling manifest
pulling 00e1317cbf74… 100% ▕█████████████████████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938… 100% ▕█████████████████████████████████████████████████▏ 12 KB
pulling 8ab4849b038c… 100% ▕█████████████████████████████████████████████████▏ 254 B
pulling 577073ffcc6c… 100% ▕█████████████████████████████████████████████████▏ 110 B
pulling ad1518640c43… 100% ▕█████████████████████████████████████████████████▏ 483 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> Send a message (/? for help)
>>> Why is the sky blue?
What a great question!
The short answer: The sky appears blue because of a phenomenon called scattering, which is the interaction
between light and tiny particles in the Earth’s atmosphere.
Here’s a more detailed explanation:
1. **Sunlight**: When sunlight enters our atmosphere, it contains all the colors of the visible spectrum
(red, orange, yellow, green, blue, indigo, and violet).
2. **Molecules and particles**: The atmosphere is filled with tiny molecules like nitrogen (N2) and oxygen
(O2), as well as aerosols like water droplets, dust, and pollutants.
3. **Scattering**: When sunlight hits these molecules and particles, it scatters in all directions. This
scattering effect is more pronounced for shorter wavelengths of light, such as blue and violet.
4. **Blue dominance**: As the scattered light reaches our eyes, we perceive the sky as blue because the
blue light has been scattered more than other colors (like red and orange). This is why the sky typically
appears blue during the daytime, when the sun is overhead.
Some additional factors that contribute to the sky’s color:
* **Atmospheric conditions**: The color of the sky can be affected by atmospheric conditions like
pollution, dust, water vapor, and temperature gradients.
* **Clouds**: Clouds can reflect or absorb light, changing the apparent color of the sky. For example,
thick clouds can make the sky appear more gray or white.
* **Time of day**: The color of the sky changes during sunrise and sunset due to the angle of the sun and
the scattering of light by atmospheric particles.
Now you know why the sky is blue!

Entering /? at the chat prompt will list the available commands that the Ollama REPL will accept.

>>> /?
Available Commands:
  /set            Set session variables
  /show           Show model information
  /load <model>   Load a session or model
  /save <model>   Save your current session
  /bye            Exit
  /?, /help       Help for a command
  /? shortcuts    Help for keyboard shortcuts

Use """ to begin a multi-line message.

In addition to using /bye to exit out of the Ollama REPL, ctrl-d will exit too.  That’s it for now.  I will post more about using Ollama and Llama3 later.


There’s a new sheriff in town

On the heels of retiring my Hackintosh 2, I went ahead and transformed it into my new PC game machine.  My old PC game machine was built from the parts of my original Hackintosh (now referred to as the Hackintosh 1).  I was able to successfully transfer all the files from the old game machine to the new one using PCmover Professional from Laplink.  Now there’s a company that’s been around for a long time; sort of a blast from the past.

After I got things set and made sure there weren’t any issues, I went ahead and installed the RTX 3080 graphics card.  I ran into an issue where, with the second NVMe card installed on the motherboard, the machine wouldn’t go into the BIOS with the RTX 3080 installed.  I removed the NVMe card and was able to get things working; I am not sure what caused the problem exactly.  This card is wide and takes up 3 of my slots instead of 2 like the RTX 570 that was in there before.

I installed Cinebench to see what kind of numbers I could get with my machine.  The numbers weren’t too bad.  I still have a few little things to do to the machine before I am completely finished.  I need to remove the old Hackintosh 2 drives (disconnected inside) that are still in the case and then install a few extra drives for use with the game machine.  Well, that’s it for this post.

Oh, here are the numbers from the Cinebench test.

Alas, Poor Hackintosh!

It has been a long time since I last made a post for this blog. So it is high time that I get back to blogging here.

While the Hackintosh is still around, its days are numbered.  Apple’s M1 Macs marked the beginning of the end for the Hackintosh era; Apple’s own silicon gives it complete control over the Macs.  Apple has been transitioning away from Intel since the release of the M1 two years ago, and Macs with M2s started coming out in 2022.  Apple is expected to stop updating software for Intel-powered Macs in a few years or so.  At that point, the era of the Hackintosh will be officially over.

While my Hackintosh machines have served me faithfully, I decided to make the jump to Apple Silicon now.  I didn’t want to mess anymore with all the work involved in upgrading the OS from time to time.  Since MacOS is my OS of choice, I wanted to keep on top of any security updates as needed.  I want to spend less time tinkering with hardware and more time doing actual development work.  So I retired my Hackintosh 2 and replaced it with a Mac Studio.  I got a Studio with 32GB of RAM and a 512GB SSD.  I wanted more memory than I could get with a Mac Mini, and the Studio also has more ports, plus an SD card slot.

This is my third Apple computer.  The first one was a MacBook Pro 15″ that I bought in 2007, and the second was a Mac Mini that I bought in 2009.  Both had been upgraded to run SSDs, in addition to adding a second SSD to each one.  I used drive adapters from OWC for both.

I cleared off everything that I could from the Hackintosh 2’s boot drive before I ran the Migration Assistant application to transfer my setup over to the Mac Studio.  The transfer went very well.  Afterwards I spent time upgrading any application that needed an update, and I removed a number of applications along the way.  I added a few new applications that will be useful for my development work.  I purchased one of OWC’s Envoy Express enclosures for extra storage; I had a WD Black 1TB NVMe that I put into the enclosure.  The Mac Studio is pretty much all set up now.  As for the Hackintosh 2, I transferred all the data from the other drives to my TrueNAS.  The Hackintosh 2 will become my new Windows game machine.  I will have to do a migration from my current Windows game machine (formerly my Hackintosh 1) to the Hackintosh 2 machine.  I also have an RTX 3080 card to install in the machine as well.  That’s it for now.


Installing MacOS (OS X) into a Virtual Environment

There are a number of virtual environments available, but in this post we are going to look at VirtualBox and also Hyper-V.  The reason I am also going to look at Hyper-V is that I will be using a Windows 10 machine, which already has it installed.

First, install VirtualBox onto your computer.  Also make sure you install the VirtualBox Extension Pack.

Get the MacOS components directly from Apple and not from some dodgy rar site.  I will be using the gibMacOS script to do it, just like it was used in the Hackintosh post.  I downloaded and extracted the zip file into my C:\Utility directory.  The extraction created a directory called gibMacOS-master.  In that directory there is a Windows batch file called MakeInstall which will set up gibMacOS.  Python 3 will need to be installed; the batch file will tell you if Python 3 can be located, and if it can’t, it will give you the option of installing it.  Once you have successfully run MakeInstall, you will run the batch file gibMacOS, which will bring up a command window for you to start the process.  A menu will come up with choices for the versions of MacOS available; I chose the latest 10.15 (Catalina) available.  After you make a choice and the download process has completed, the downloaded file will be located in the directory “macOS Downloads”.

For the rest of the instructions, go to the following link.

https://www.wikihow.com/Install-Macos-on-a-Virtual-Machine

The instructions have a number of parts.  I recommend you get your MacOS components from Apple as I did above; don’t get them from Mediafire like they do in the instructions.  You can be sure that Apple will still be there in the future and that the file you download will be secure.

I initially ran into some issues and stopped adding to this post.  I recently added the above link after checking whether anyone had installed MacOS onto a VirtualBox VM.

I might look into Hyper-V later on.  Not sure now.

BIOS 1 Hackintosh 0

I recently said that a firmware update can do wonders.  But a firmware (or BIOS) update can cause problems too.  🙁

The BIOS version that I was running on my Gigabyte Z390 M Gaming motherboard was 8, whereas the latest version is 9M.  I was having an issue where my new NVMe drive, which I had cloned Catalina onto, was not recognized as a boot option.  The BIOS didn’t think it held a UEFI OS like the old NVMe did.  It had to be a UEFI OS since I had cloned it with CCC 5, right?  I thought I should get the latest BIOS version and update my motherboard.  The BIOS updates come with a utility that allows you to update the BIOS, but you need to be running Windows to use it.  My motherboard has a feature in the BIOS called Q-Flash which allows you to update the BIOS while you are running it; no need to use Windows.

The Q-Flash utility allows you to read the new BIOS update from an external disk.  I loaded version 9M onto a USB stick and booted into the BIOS.  Running Q-Flash was easy enough.  You can also save a backup of the current BIOS version; I suspect that is in case your update fails and you can’t start the motherboard with the newly installed one.  I selected the update file from a file menu of sorts inside Q-Flash and then ran the update.  Q-Flash said that the install was successful.  I then rebooted and went back into the BIOS to check: I was now running version 9M and I could see all drives as boot options.

But my new NVMe drive was still not listed as being a UEFI OS.  I then exited the BIOS and booted into the OpenCore menu.  The current default selection was my new NVMe, so I let the process continue.  It never made it into Catalina; an error was displayed:

OCS: No schema for KeyMergeThreshold at 2 index, context <Input>!
OC: Failed to bootstrap IMG4 values - Invalid Parameter
Halting on critical error

I then tried the old NVMe and it also stalled in the boot-up process before it ever got to the Apple logo!  No error messages that I could see.  I then reverted the BIOS back to version 8 and tried to boot with both of the NVMe drives, and the same thing happened.  So I reinstalled BIOS version 9M, figuring that it wasn’t the BIOS upgrade that caused my issue.

I went to another computer so I could search the internet for a possible solution to my boot-up issue.  As far as the no-schema error goes, I was able to find some information.  It seems that as of OpenCore 0.6.7, KeyMergeThreshold is no longer used and should be removed from the config.plist file.  But that still didn’t really explain my boot stalling issue.  After looking at various things, there was something that helped me figure out what happened.  When I first installed Catalina and tried to boot up, it too stalled, just like what was happening now.  I went back into the BIOS and checked on those settings that I had previously had to change.  I hadn’t thought about it, but a good number of those settings had changed: with a new BIOS there were new defaults (except Fast Boot was False this time around).  And like the last time, once I made those BIOS changes I was able to boot into Catalina again.  Using the OpenCore menu I was able to boot into Catalina on all three of the devices that I had it on: my two NVMe drives and the old HACK1 boot drive, to which I had cloned the old NVMe drive.

But there was still the question of why the new NVMe drive wasn’t listed as being a UEFI OS like the old one.  I figured out that CCC 5 didn’t make a complete clone of the old NVMe drive.  Using command line magic, I mounted the EFI volume/partition on each of the NVMe drives.  Checking the new NVMe, I saw that its EFI volume/partition was empty.  I then copied the files in the EFI volume/partition from the old NVMe to the new NVMe’s EFI volume/partition.  I rebooted into the BIOS, and checking the Boot menu I could now see that the new NVMe was also listed as a UEFI OS like the old one.  I don’t know if CCC 5 has trouble with APFS, or maybe there was an issue because I was copying from APFS case-sensitive to APFS case-insensitive.  Anyway, that problem was solved.
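
The command line magic amounts to something like this (a sketch; the disk identifiers are placeholders and will differ per machine):

martin$ diskutil list
martin$ sudo diskutil mount disk0s1
martin$ sudo diskutil mount disk1s1

diskutil list shows each disk’s partitions; the EFI partition is usually the s1 slice, and mounting it makes it appear under /Volumes so the files can be copied normally.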

Side note: Later on I also did the same thing for the Samsung 850 EVO drive that I had cloned from the old NVMe drive.  It too was missing the contents of the EFI volume/partition.

The last thing I needed to do was get rid of that “No schema for KeyMergeThreshold” error.  This called for editing the config.plist file and removing the line with KeyMergeThreshold.  You can’t edit the config.plist file directly in the EFI volume/partition, so I made a copy that I could edit.  I opened my copy of the config.plist file in Xcode to remove the line; the KeyMergeThreshold line is located in UEFI->Input.  I deleted the line and saved the changes, then deleted the original file and copied the edited file into the EFI volume/partition.  I rebooted the machine to see if the error would still appear.  Because the error message just flashes on the screen and doesn’t stop, I had to keep an eye out for it.  And it did not appear!  Success!  That evens the score with the BIOS. 🙂
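
If you would rather stay on the command line than use Xcode, PlistBuddy can make the same edit (a sketch, assuming the key sits at UEFI->Input as described above):

martin$ /usr/libexec/PlistBuddy -c "Delete :UEFI:Input:KeyMergeThreshold" config.plist

My next task will be to see if I can install Win 10 on that older NVMe drive while using OpenCore.  I will have to do some research on it.  Until next time.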


A tale of two ports

While I was installing Fortnite, I noticed the download was very slow.  I ran the Speedtest utility from Ookla and the speeds were indeed slow (6.78Mbps download and 7.67Mbps upload): faster than when I had wifi service, but too slow for fiber.  When I ran the speed test on the Hack1 running Win10, my speed was 456.45Mbps download and 119.78Mbps upload.  I booted the Hack2 into Linux Mint so I could check the ethernet controller; the speeds I got in Linux Mint were very poor too.  I was able to determine that there was an issue with ethernet port 14 on the D-Link DGS-1216T switch that the Hack2 was plugged into.  I plugged the ethernet cable from the Hack2 into port 12 on the switch and got good numbers: 445.90Mbps download and 228.89Mbps upload.  When I booted the Hack2 back into Catalina, the numbers were in the same range (459Mbps download and 224Mbps upload).  I checked out the rest of the ports on the switch and determined that ports 5, 9, and 14 were a bit flaky: ports 5 and 9 were stuck at 100Mbps, while port 14 was stuck at 10Mbps.

The D-Link DGS-1216T switch is about 15 years old and is now discontinued, but D-Link still has a support page for it on their website.  I was able to download the user manual, the last few firmware releases, and the SmartConsole utility.  The SmartConsole utility is only available for Windows.  When I first started the utility, it couldn’t find the switch.  I noticed that there was another program called SmartServer.exe in the folder; after I ran SmartServer.exe (which opened in a command window), I started the SmartConsole utility, pressed the Discover button, and it found the DGS-1216T.  The reason I wasn’t able to find it before was that the switch wasn’t on the same network that I was running, so I changed its IP address and gateway settings to match the correct network.  I determined that the firmware I was running was 4.10.06 and that the last available firmware for the switch was 4.21.02.  The SmartConsole utility also confirmed the port issues that the port lights indicated.  After looking at the release notes, I decided to go with the next-to-last firmware release (4.21.01); the last release doesn’t allow you to downgrade the firmware if you need to for some reason.  Once the firmware had been updated, I had to change the IP address and gateway address again because they had been reset to the defaults.  I was then able to access the switch from the browser to check on the ports.  The three ports that were having issues were fine now.  Port 4 has a device attached to it that uses 100M Full, but the port tested out at 1000M Full.  A firmware upgrade can do wonders!