What are the actual Tesla M60 models used by AWS?


Wikipedia says that the Tesla M60 has 2x8 GB of RAM (whatever that means) and a TDP of 225–300 W.

I use a g3s.xlarge EC2 instance, which is supposed to have a Tesla M60. But the nvidia-smi command says it has 8 GB of RAM and a maximum power limit of 150 W:

> sudo nvidia-smi
Tue Mar 12 00:13:10 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   43C    P0    37W / 150W |   7373MiB /  7618MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      6779      C   python                                      7362MiB |
+-----------------------------------------------------------------------------+

What does this mean? Do I get 'half' of the card? Is the Tesla M60 actually two cards stuck together, as the RAM specification (2x8 GB) suggests?
amazon-web-services graphics-processing-unit nvidia
asked 1 hour ago by hans
1 Answer






Yes, the Tesla M60 is two GPUs stuck together, and each g3s.xlarge or g3.4xlarge instance gets one of the two GPUs.
answered 53 mins ago by Michael Hampton
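To confirm this from inside the instance, you can enumerate the devices the hypervisor actually exposes. The sketch below is a minimal example, assuming the pynvml bindings (the nvidia-ml-py package) are installed on the guest; on a g3s.xlarge it should list a single Tesla M60 GPU with roughly the 7618 MiB / 150 W cap shown in the question, i.e. one half of the dual-GPU board.

# check_gpus.py -- minimal sketch using pynvml (nvidia-ml-py); assumes the
# package is installed on the instance (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"visible GPUs: {count}")  # expect 1 on g3s.xlarge / g3.4xlarge
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):        # older pynvml versions return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)            # bytes
        cap = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
        # ~7618 MiB is one 8 GB half of the 2x8 GB board (minus reserved memory),
        # and 150 W is roughly half of the 225-300 W board-level TDP.
        print(f"GPU {i}: {name}, {mem.total / 2**20:.0f} MiB, {cap / 1000:.0f} W cap")
finally:
    pynvml.nvmlShutdown()

The same fields are also available from nvidia-smi's --query-gpu options if you prefer not to install anything extra.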